Archive contents (ustar tar):
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz  [gzip-compressed binary data; not recoverable as text]
30GN@&CcSN f4zmLJ!j):q)g U#!{o4j6T%'kDVY졢fB 4鷥0y0˓㽭CZ&Q]6r=߆M;LT ɭprAr2⩰ɰ##Q*W=O\\}3yR-YJx-CAjM \ 4Y'"1ht8>9޾G砨jg޷ TS Eq ~=YrUEB|x~Z˓1ާ#6/A5'ޞ)&uAvK=@]I'$F%E-{Q[ؚתsp52lca?8#}ZvˁOz2ϯ)myAR1M m@ $ڑug_fl-nx!26#7h/=dYn:6y?]bd->YTǦ'ϝdif~\~s|ھSd}kz2^|k|fΙ*3'Vi6p@'vˉkwis ^^qI:``r-EYŤ3L*eS0LΠ LdXFExqOC=3P(+}y ˗=pZN=s>בg| .??MC>lT@PMQѪ!{IwHl; 쇻Ip8`_Hv[,y$Of&WdVKm[v0jdUybtU#]=zw(8Eqe&uv?$ &\huc\׆9NǵIYLв$KM@iV;oާd 1K9[.m, 1v@,k~x&KAr#',K5J闕`PY4\+& 7PD4|{n[P gI-7')y2)dznN>RZ*esR 7'gɽ^ɑHKq<ػcx˜$R1\ŭ!U9Ӧ)jy=(B"'C򢭬^0P@<,5:RU;6RkaՄuU߄>w?<;෥BWW ݤNo*ov=@?Q)JwEģ;Bю+`pi8>^Y^sPcq|tidP@sRB])k͊nnE4q8q``'ZKDBqcsCzN`]W3ɗj"X$׹Ijըzuh`:| &Z|4Qh l(Lc->u.>[8]^V{8!ꔶBQ``dBVhV˾o` }[3) +X)v2*kTJ]e)^ z yM1H=7l_b>6A#~M'83wՙ7D:8;׎Qw.0fauog?\^_-, cqla/4u*Y\;=̶U/'f!`E&yNm{>LNx 0ECiű#g4(l`4p?!wsz:Y\N]eiE7`rpW] !XY˝; ˽ۿ Un&wkiW+MU7F+GD vNn|7MoG';PJSqFHF䦳tP%eiAMg)Bc7z4B4澂`\i:}῿=_(g[ZӫOc*2H30Jj>^/u_ohw2ʄJ:咢&$p*jk=qx{{cc`"&@N'"ĸ"zyhC [pX M-)vc:[6+>6e\UQ[I TO8aH!}l,L<Ԫ|Tm0h`ɅD(T2tDH4G.J#IC@C"e'rՋ%I](Lrη5>|ԩS `1>R4DK;CPkqy9ӯ$)kjjP.v/mTXjgKk(%܎o[2 #~TfI9SXQs2[[Y\~2'YZYJ׸@8v>-t8/^67mt˃V>_޼}6P?Zr&|흵z.uvJJ%W %&2536RĂ& ahRhm bbԦrrIq-l h<͠5>M*?[tx66r˝M}37?O>L'-TM6/^E+Zmz,|4͌o`wS>un߳:f1o3M-R*9 a[r%YʐPQB.;RFyFӷb^?<4聆(h悷<M$k^69k5I,|QLnEɫ.Oe7?Jgt '"sy a1&8qcځ!81L 9A"MMLDbE 5r %6@$HO!ZGHei N)}aC-&nCyǼ!,!ccg+3ċDOozD%Huͯ[}^ZM_;;:yS5~g;Zds:_vqϏ|{wdpNj-DEʔA^rVXX`> gFubRBrox!1zυUH`c)qb &tR=c1qv{rX/,B^$}qK@Na Xf_f a]H":*k b5_fXL?8xēTQD"zh"%Qu)@FA򨊮G4:"Gng`TH)<5b-4Y8Y8ZW:iɾ~Qr_ܦ`4JhZX@8ǭb BR턴4y"gOt}Ŵc_h C?< []&^k8~M ̽X-gkыݏkWDZ لQA~TnG?Zjj}j:fW3@ʀUeyU>j"Y|<p~{=m!nם'勤Н|짛3kYŗc*(U49/R&qKG{"$)IM)٢ 䴳FD `ѕXV[\:E JQ*d Y3b8qϘZ"8n?;Y:~)͜ab`,)vx)V/**)+!.nRbɔTu+\Զr9Q1RZ>oJKBŔsS.ZAz )hE|HT mJ!-9ʞJ 638 1Đq?ygLDT~Bz!ԣK 1UFw;b; gBZ8r~ z?SS*N4"zpϓz=`~$rJVJɥf9$I YiF GjX 8ttdK$HteoL jЗSOҗu?.M/VMJrhHmBۛέ~31^+>]uUq8'پ_:tGn n->E݁-=]rB{{OwJ}q<hxq~:]Z/< >Đ<˥ R,<@Y\ R(KkT߫))  ^ƪ@J1ʧ@ttR#Th#] l4Q-]hB].5V "}$V(g+S/^Ӊ6+l tzaW? !9?{u3jBz\hg$TF ^yPnB/:GKqȢ(Бpac煍{z6и2nh,zHkm$7Eo9yhEl68@vd0VF<_.[mږmn5٬_u IC':fg.]WLu̟2q^l緋!xF_}-n'(,CBّ̪QE4G{Q;,w9kMkXNJ}I}ОH%[)N"Ȅi_P)S4X b4589ɇ<㏑g|1)R=|c2=w}דEu"Twu_L7Ry 3U.Xu(e@Ru`Cu{ " h0mS"< "Ա0 3dP\7PY 1LxMT{ JΈ҉21%YD\VIʩxzsJc륲flT7Kv;a~1[dE8>j 3n-/Q _5ᦦ{g[W}T7Dʨ#SL- lT0p.cY LW9O9>|Bπ<\4 iֱBIYCNdJ`D`b5$&A :d$Cv@&S麏:6M!p?3r|,y[@bebI)4kaŅo L_(Ujѣġۄ)jmٍm/d#īN|`{_Ȗ$#CRdSP":XٚSCɘCd/$UeV0~mn⏌vV8x(yj\;iVeG[s^=q6GX, :EZ]tVYu|~1>'m+V߿rrW0_ܻĝN[2xT!Z_ޗ_zn=g""_Œ\)HP` JK@C#'6ታ߾fŽx?x:g1V4B,$hTN"&L1D$@ A7_oMO26NjouƋdt1/yz J5!4Hx3g/g}&_nUf]@-d*& \ʅ;ʥv. w!DB "+(Z3R>9 (R%$"֮wKXHS !.eg p:yz)08-ENF8۸ .pyL{&Jڥqf>=eMKOM,'LZfz,S2]&1_}&jHKxAVZ\U\/ՠ'뵸0zȢ8T.uQ-JI.A`;t!d%-E)V;^yKzԝi\]O lxȈJ`ш@)%JL RKVD![',VlWU:Cٱ)>q~crEƓjH+|g2F[ d ,\)J^U F3 2%׎X\Q&Kk*-I"=r#[kdȞ!X4@pwe|yvJ17+V(I%j2wʅ" Č.4DOfVa?jZ 2M L)UN&!yHk+P4b7AXԭHm>>F͞#^F f<{s^KsLV?~г~ euoFm<x4q~.9R ~ԯM//I' X$-Wwkd]+u]HJȽw` D*,ɗٜh/O\:_=<*Qp&h#. 
{.n9VӃu鼊`ޝ?_}iGi!,'z zb#oQZ0Gb{q5XRNg$eʜRBH(,1 (U> py1tO2n_wq-pr4l*JZY*s2S  %\rS(`MŦ=l@5ـکfc➁Wmن>J_ypp }P'}t_lR|$[@v1]u1eDA^iA*sQLCQLQLɳ$ٰdEڈ,#B)Eʹ jWgCYWݛ:W<: =A<닸cC1F^P[¸}T%RXZ*re>.`G{2z~RgKyN*&݂+\W}`U䋁+:\U)zpS?:^ \Uq5:pR^%\@|{CUJs7k'5NNiց׾Bc >*#ma z챶NNlf!PKjzGV慂0ĺ e)>1[lr޷u(J`hu\PHS(M('0QE[udt>m.َOr:6?\Uy;ev K%Kf.%-U{Ba})7@@;q΃g(񔝎+!C(%jRG<+U7fܮ7@O/F؋կaCM`W ^"^[J`-vOؕhd6::^-v; v v20> =z7P:ɤ eJԨW*CB!Ȍl?l`d^ bL<=-wSuC k9_q䣧u3~:0eq.fIl dc !cZLAQQ0a.ZfS)OWU^ F緃}Lֈ$-EJGr#KD40ttHCKQ."f :6ަw`'Wy;{ -=,\ AtL^[cHwUp Sf.9QQR9^vđW7%08ߘ\; $-P>Q >h) )/rFW],V^_{giO*MzG2+dV껭|`}9)oSn^=bݕmKkˡr [uen ?Pgzل/u>-tq:^ Yg~?iQG rNu3bX@^iZ kQZoԽdi5HL4Nc4WoF%7 wt^mW~'ElN}ajĹqfXld Q]S XXXx.QYe=;˃e۳Ͼ*_d<-l "F)́ДBs KB[V r7Tcs*dMel4 &[(l rELhu.V܎x.F}Q[7Fm=`7xQbD.ɐɆb!/]TҠӤNE؄S$m@H6@CflEezH_[NpHU7q6Y AN aRDkן!)Q^,42l䰻z믪 cMF1cC**A=XYkcw[~sV/q@Ľ@Da,`YUrFcmEEe,[y)":!"h&!Q*΄ڈFfH Rn#bg|QYP>:;%E1.\4팲ڀ22yB Iia^%h"N?b7'[~9w;Ʒ4q+#[ul//^'j^rr÷v͋DJ|W" {ۼկ/&|,<\~37=v&066Dx,PݟҲEoomgQm/%z7K>^ 0J4! %zEلhl2>W"h!-r%Oܼ4"|޸?ƄPRւȷ3%!I`Eg<6Ol2_WG7vwX%Gi0;t1ǻqGWMr*+gHOAZ-,.;InSC e QH ?5H[kTY$0 8,ATg蘺KA)512@3@ꢕbLϞ|>nd;_Fu unV26DGDVZ~ BN`5DN:U%4K1#*T231H&LވR&kwxx (s)lPA0Xt"ǚϼj]{>ul68w It_ʭ_aloSJ^otX~ IJ!fbv1;w1;ԣEL h5(YM9'BQ Y'ALD-pݦ)^D 1$T1NNU.$R`ўvJ@NZSt%|ܧxB.Oϗ+ fjȼ ={zrq쎇].vf{̏Ŭ_LX/ L4P>7.d"hxxqFQ^e'^\Ԇq05φŷL'{vxR:PTƀOr7>m ߗ|dN: P̟V[{Wv &JžUJ{^*%ѐW-Uq8<]9y/]R%F $JnȒh2QЈB,+;]Kt{ZSE=\іl* (y|!)uBS,SQKW$ZhS dҚ$ Bp_!e_d2N“ʌ6`/:6;gc 闯ԧp;_ddY\07җmZ98>dp'~}&2|exw (#.`SHi rl^%‚Ou./ 00`K( XY D1l3% 2BJ!N cJdrV1 +dS2.TQ(Ǯ=C0kHO {i#˔CI'<_<IZ!O fE)Q[:VlF0X'RҥDX - T(`y*]qRH̗  #k=5f(*6MubB뭲 W%`VŅoto&X.C%Cۄ.RԵe7\g]ȎR ^xq&EVDȦ $9z*MٚA]3Yc/wHtc)s+.qBZBj>MǷ-u/St j іt7zt9/Ɵv?+X\"-;,sTA;UءUc|MMy-ֈ|,jgT0_w4lNdpT!_|(\ܹRZLJo#OD(JĽIAk RB„RV(J0ֺ\p <8پz}9:ؠxy}ҁQ" JΠ XE-s(imjEHQ'˿zuzurGڤ%xq#ܗck(6aǚk;慽Ǫ=uPGPS4X#,Kp]M E0`CbB&HMaӛ=i-[oVJ}I۳9=Co%3QCt(%FM u/YZ>xK@nfJW|(;<'<-O~WatF{!TO8KQ6J!ebÏ !V VLH5ѵcv1)at 36E,,$y*֑ΚZ%cYKU5n"By|y`sT P [ZZt!%)2?uU_J?LC S6.̉RQA'z~d2EzTk.3`SB"uHqv o8AR;A7%>1h|9ʽTSO‘&|?aгZ 7ʲQb|tGwtjiWBkUgp>'i@G -1yqpFM$たi|=]2k-jO>[튓h4=TOn$O`ֲ'Ek{"uO뻱jm7W┧g->wg卞]9>_nk{ fë^gu!/uvx"֦FgeTpt6ߗ_HlhT缱t(Ox:᷿w?~~Ï?O>%cVND0"1 Gp뿾eeg]sˮ֦Q/n~.߿Wx $/8MEQ{Op^5}r +,6N[/6G)(pr2BOmrqIDcTt1I @j˩a˔:&M~ȘF3fC2fcjc/jrg+HaK,"a)EȈҹފ,$ /M78|#ts_A”jWgqYSUFKl4.q54jDfayUr~Qgyƾ'' :G+Q)_ ԫLfHi0-bΗZ*- Wm_~AVy`0ܱcE("`}(;QncۑAq3)e`fΧjs_R5#zȮ[Cn% @CsF! +Kb tƢpESѢ>vԼhޡE`b+:CW.BW誢w 骢ԮwHWJp]ުuw loUPZƿKRZ$9 z#۾ZHid`݉BpWv̏+}rqi# W]2 0@w|v4]VYQہބ(\+NHz+UW誢GOW!]9]`ughPJ?]-^piՋ˃/+2N^|]ɞvz@GeEp@W誢骢t=]G, ؈U{t /B骢HWq4 ik- aӇA~ńULz3D?a0?1郿hNIL?c눭zk}Q](3.M/+_>[v\D@Lr8ϣm)2EW8kyP&8Nbk/'erΦ&tG!7>\"N8ҁj\}OsCF|8V8ew?Vs$'1L(FEiwl??;aih8R*Ȩ(rNO\01[XjY< -A}k!3dIKNzGj B\58[?}۽b|>.&;|^SUig3&dEJA:T0 D:(Er)<lUP3r'MP\̪XIO){ɒ@BbTZ1Ɩ585Ub'7^KɎ5^W6j\|"Do} *4'gOZچ#a<]N{OeQg Nx/37YH.D2ɘ|p6lȔB*3jkJJ%^hkS zr* "l)MN }!j[UZfƮ\rk= oվ;TiH/Tnmvz3t˟p86~p&F`jb 5%9"X.241A@%N.cK+ kƞ66QcИLPȶ.%£mלm fQZ11[jUڽ܌e.g.|j1Ҕ.J)JȄVU>,JUdfHV445)E6jR RQG2e>l f>l<\\X3bψ`Da8LҮ3un'>Q` :[ aTƒ'L25AAB hC&AQ*τLFfH `X[flfo癆:/NB,ٕM˼{^yhg1CI WF&oT=pg}RZؠ%'b=/[kn| |t5r7 6E ̯=榗n7bC\_̊~ 8z7SH< x0&9},BkAӷebb.f7'8h0kwueK3:ͮUt9/|t)5imh6FfEYq]qqldВOjCI-q<cys؛}C1ҲYʫ_-~hKR-_*fthtlZ'7MJ1T! 
ςH9^0}$؎{(,Uv $WZE"7hHvcyq#pj^tt>io\Ͱb^Rr^g"P>'m8gL/Ӛz<7+{Da"D0/ѱ.zqyZ,ym6rx PP BTP`ZФq5NDQ*+T>g'F e.bӚ)U) UK萌Omo Ζ'z $)%13RMt*<|,,='25>{ CqSfqR6+3`K ].c3*c^"B=D[+š}q"'_TZ (@Džrgj&ތqɘ~2}DZJ&M'($oDL)5;MlѦ`3i{ B^ƎHƪGkg~hY{'?N۪g8엑1/-46/)E٬ ƫX2DXֳQUH}HTg?8 ԉ\/&I ڕ AY!Xn k"ǐ.sdflPЧK$SXNN0d`CUڂYzHH?Hn*SF76l¹kh^<$ǍOҷvqNo\{@ڑ$NM;JlDDvdDgyƾ'''W8R64Hz:Ǘ5*XDU#[Wa/ _3~vըy8] B{Ozց4-Oh?o}tLe `IHςMHYؙ1H^S]sW]8\x5j+kn͋Qd`%/>Sij Dt5,-yFZg1kU6Xp9 C>D)9S@'5O t>lȕD9yyᚷ-#ѴY2*}ϸ,8^f]=tmNOx뽮t7׻盧9nhËl;ݹ}ϽG-ݼE-Z^Pooy(>o3*|O<ohxAR]3SsfI_ط-wG s˭d+u(q\gWw%qEkͱg3&F>;f[w%9,)$ʹQ+`#5D - fL5ӹwX<o>8>]z;Ru%!J6s ZF kmH&8g?xmo =X'gB_%n(R^DQ"EICqd"9鮪.JQWYчJ2pJi.՛Rou}|&hU4V[dzML0NzB9S$ BЊN#D#^q:ŰUFa,lK 1ĐLȜ459 c89%Dn+H_*, Z;Oʒ>&$N8ۭvZt]qzlilC B91 9dxAx O w3|U=4O(D2d,(4' c,QOZᐶ,NAh]?YIxWAKpϚK=N) ˠɘrH6/1Ҩ ȹRNZ%wx9oeu}-ס4 {-yDD6PEgݼ}0 K3V92,NzБ&*(R!@>gш˝oJ/rǺ\ m,zbqtX]i*abČ;&aA.A}vs8>Q4KiᯙHY܌ۧQ"S60p_L޼,\#&W//[N57o.AHNcޢ@Fpe;lȹ P2l`&Y *>C=)+9Dn6F6a^ninm9T[zN7^LlrA?5\ؓc͠:5]1^^ CY,)*D XM.MtֱQK2}y ` w2YNv`?4u)-t wV_ᳮ$ "iJ¯J;H/HEvUt.l4,3CjB6&yu1"[=3|aWޔ`qO`ReJ+5q~`BΊ%]9;^jn*[>Dִ`V<JZ~-*^:Z:n1]w~1W tVp/bh4.˱aͪ՛JW.$ }rBb--47$Nv@[lx:jv%p`ۏ'ŊWBO* iѨ`AFTWy^A^3PϨy{'9^ QpImAb*XK CQw@ZAʎ%F5z-d\邰.ERfd*c&"T!s5Zipȹ,{U >E>]om`܏OS iis  f;亓U)vuܿ{LѽCV!B6p&ϲN>+u?ɞ_H{빁)j4)s]!qR=4~.FdIEڏwX<HÙ`L Q|ʤ/>g"'xg*T^psYmJ٥vR?z{Cf2eGg;;wB 2!XP6( I0+a$I;c)CE[+4fM2oΊ壻o=SfY]滸6%#1]Zႍfl蒈Ȋzj`1d!sIh%^7: ѡUZKj5HtEьiHHFuwpN+< URGZ} 679H c4Y&!@Lh}>8c )I.c0W5m#HDrfN1K"BitCNj`Yra@nj 5+H^KRn]/sSsҷGҤ̑B4ۇZ 7|^$ۼ7 ;:b0\H$ A8 a=ę coŐ,'rG<` r|FN"H73?r0<;H[>_j:$z^ R&!׀^Dw\8( '|b8r%S;5}靻ZWʊݯBGGW%foU lu 9G(m]93Q]{ս ^֮O?te`v1gs {5KNp4rR=u5u#I\;G:[7 _?kYf$1 G~,Xvpe>1gɦb6:uyVBƲŨrܜ2R>]rkR QuuXo4=+7o"ezpq/O߿}Ûo8!Ў'Ib{MCn ۟Z4647Zbh|ل=.#Jq1= !>~fǡD|ͮKXծ9->ʁFڷg_!IEavRE+u'YUAxG/u>>]/6G)մ׼#:F9թoZ<6dD_o~td\Ld12h˿]먿pG?et`15O) ,DCњXӚ-0뭃Π{j\?WF_`[P!+2җK-y2LDGae2 &X"FZB1x` Ͷa,c7flsC([0`XcH 5ki'e3b%0M"j+C2*4p]q`-ZяãYA KXTL*H< 9sփ3[Љd"aM9t5LaBԙڊ᳡o*QOtUy QTfc7^FS9M!MD4GWd.ဆ{PY<ԹEQlMXpЕ=5mUoɈ|JsÃE0\iWF&0diɝ7a@c܍njQ׳Fgх>ĭ 5NݼmO-~:{jvء=;CGN4 4y&%q&hy礂 *!L}H1W1pĮ֣uֽkHo=6ms( Qszv[-Z*۲VګiZ]ykS좁zNӰ7ʎvփ^Ki黫uK?]_Gk2d ހogm< \ܚhkinn*t#}|8:'kYcUX!2+ fevY]n[vmvJ)TFų%䍀")ĭ'E2zf% f#stJ.%%3{ܶj!*ו$k%{xť'Ř"i\~=H@BHQ6\^ {{ eZ3RZFL&Z5OHKD>iyYcE;uTp8vztCoR,ZC ǵ-:\_q]ezZu} z2}ª5.i5I+&C(zISTqd.& sh]?Ϋ֛[:]R9bh7f!̡e^nZ;=4yҢ畖l2(9qlpEW}tKބ˪C]u[0ܚ/ h(5[[ܴ9(=!`j,U˭͟6stX%r6u=sZj)3ln%9(6i to?C*0D:0D,0ቔjIEŔH$ 9D  `0<J:jqy;sBޠ3 łS>焌 hU+ Gj}HR q "#IB[jm]kAmoNJVoyvjk}Nkн.A"W1+3R,@uNPRƩv:aR/K9d<a,냉KM-a$EV 1iyP’Re-G=nOSqL^"E"\o^0 Z(F,NZ9RqB WIB1!e,!$Fԩ`E*L<) )RR9)h5D*g;Jaظ="z{LϜ6]WM]ٙ RwVh=򩙢XRaRO vlCr`Ü 3vT|`܁K"騅'|hX"G!د<l~{Qk<)!DdtBq౨<${YB R{mD^+UXk1')eOZWRt]gy=fGUe<yk\W7T4K̨ai8 SH@@h}amƑqRﳪ ߸fMݪv '\Z]-'+,}QrSi\ҮM\}pbE8|.#MQy7B(`71 Z$n*a7%j<{2Bj[l(ޒ޺=tZ8`ް@ZU,;6`STۤeH\0XtӭaMDta]]-8n>xnp'FQ**QNz+EzlkS# D)m ]$# __MGH܅g0,pz}2GV4x.T;1Vo} UP\ЇV,G<#VYi]W`!JL>J@OP4͹`1X>MG]ԯ^r0mI 9Zr;Afx~Uڪƾ,P<߲lLaZ>  ?YᵆWH`U*?.*$%3Ynt9D;+eau؎eYo&{10" y ʛR 2Cwy:__Ah%gՆ("O/#'n]U'R1L N˧#3dKb_>}tqL+.xθF1 dGt_μD覬٭[ْFWESR3񳄲~ tѽ48Ⱥ~=khV86䨣wX,|C9S~Oiqs0f7\U ~ K&yU]fcs-}I3ia~^۫ cUu؇=>v=={XV#d b@(:Ɣb'WcŚR*e4ŢDu[d (WJl A2 ǨU1(K)O࿄k  Gg_tFNKrϏvoGV!_STFýcsCwmzZz4Wp.l5@$2k) 60'S.#  ԃqbCKH=z) * K@]T")9IE$HfXTqXG$ۯ(cRvqbf#)Qq%FSmI 9X` :9_lzkhjH\y,bĆ|VxR-zH0pp6ͅX$9n-#킲;\ &Y?5.|w=̳߫KR %ߙ`&ل66t>U6zP {(2y6w>n!ÞItBDG8WÑ@-C2Ad3v1 ):H6u'(¥]s'a{0 vUtb+ -,IUH.װCw^ǃx^b~2~|fq\zN?ljJ!RyBgg7r9Ct7tZ!ߙMn9*Xrj Ή 8>F*q>|ͩnU͓~-]uUOޔ^]/?^U hI̥2ppQ-7k$Ɠ/W /w/AkiƑ˦aH07Y0 V¸5~2]/sx9o\{mu1ɦQ Qt<}~\C6YWgnu 1S{⿫Άq ؎|Ûwo_8wo^X9ȬOH Eo!# ׿v64*bh!u%v9qӛ^C%y# 3,p)ERdB$(g'<#zh *J> N. 
z=#$fBOs u5o5uZ=Q/cb$i1ϫPKg/~ n2+Tdj|/ʯ/_TTS/r2~ M~M%@+xp}}㨕{v8* s2-Glp UK>+5rv/ yqVM[(}@ `JIкzPs8Q-rY0 hlͼβSѝU-J@bŋZ<ͦcj6.*{d6B,ef,rzn̮[:M> ӷ>` f[LFaU:ʫ?*[.:bA()np09;<Ncʠ̫qc=_XNLLrt.PHx3jg>OLj9+j$z$W Ϥd~ c}$<_{~,c‹8{i,SS@\^i-7R-̨Wm".ҁH(D3xqJ%с[Gvl'ZL3U4Rf3{z},7^ o<]AB !/U׷;yqc[x=nt*"1"14 %ե5(J=uޔ) ZyT2 %Zu`sa`PSfoSb\`04 BΙ0m7#0DDf0/Y`Z)RKR~O6TKfZ Sdy✪<vDZɻ5ٲ.ba˸/y7aKBEhla@ۙ)ɨ}g]݈]NOڞ|+Zv<yefEmee!+P(]bHywB#wnä |t)Vr---ZFXӺ⺋U@4\f*D)?qh!)Ȧc4' ?d Db DfZc `7Ē "+)e%95ǜF鶌X]=*>cQAep^7o`^ùvK'&(o sYW|%e]wÍzteL`Se 췍6b TW> \IwxyX5?S涬f'CIgNMgo' z^¼rhxi^w~_N7 (KJEr0=ꠒ I.Z?1g#mkxWY;f{p\ D`LjoU߼ քC:[s צC9rF[?rFFOfN{N=XUKW 8%~]zXJp(eW"N 1X@p5|0pp5W3Z:\(θW~ʉy?]_N|oע2_ Ro<=鋯Q_lȵN,$4ۛOO=Z_s֒OR7#co |7wmgn6؈/G?oWW ceѵ⚿?R6u!o&MeQ'Mr\Ϭ/B! o7&66ED:%'tf0q5i1χhfAp͌[W43J&U4ߡQ\̀ \!\PjFՌrWdr@p 7Сe:\(J7 W_I~;W?GV[w{yt|<Ht քKܻ%>, ~]zk<拹n(>l4|;Ǜӳa>MbkI8o# G5unʻWccW^X|oswҲq=zn_W3uY4;>MbZ15 (o/#ea <.krCވ?<4fc2О uc׽%(XVomsJI$L[l ?q`~]H?#]}Y/gsG_wmUōl1iz% n ޵<ļ͆=YNΐm 6"nouJHB6voZ*bmޛ܌qȥ4Ww,`sg`[m7Ey_Ϝrc4sdhvcSD>|:n\⼄S#e_J-nH k{]!4a,fLpN,sp1ZٶJF-9 -B#-Ţ[- }*[ wlP.xrøl)S1X;BcU>1b5)YL31Bd!8ΙRRmm3|fF%;ck u}v65p#Yn6AZ H@5 4O!RAx]ZwuRZ h4Z ɊkL 9ڊ1$gnBNG#Z`쁘% "#-k@]05v8h[HZGuid`4H~)yQ" R(-ϰdRxp-e[N|_f%êiD)S,EJ\,dp/C^@[Fjcm Բ,▲k]|cҺ9FOưfBHWLʝ-\@6$TȄ@$dӚG1Z((*-wci:|U X|ge` ѯ}O L+yX;Rl#fP5yb~0gh¶Xo9! z B@ %H +quڂ:c@ϏE3*3LL@TCG0:Z,pPRH 3HMBJA42~A[*<S_@LA+d*l.Q#/.0 !:+4 1XոUYHI(bmM; *+=dU BY( .RDF@ByfUϝ{y =xP&{"I tf4g6]ףX4 J(No* z}|rBgmL|cS?W=̓!+-x VQ H0B4M@-$L>g.:p2@-6JySʚlMU@c$O;AMrU. lEf>ca9XTY$@fZ&kJԔA?Am j @!o][pqUcdX[Y!3 JVNER!>/C$_(`ZX- UX*<3RWh];jQ<:Ic- T DžVǓqy/n9=;m/2d}8+tDE?1?eq Οp v'^?;tg=t7q)=W ^'Jc"/z W37ogG̢p\⤆+5\J WjRÕp+5\J WjRÕp+5\J WjRÕp+5\J WjRÕp+5\J WjRÕp+5\J WjRÕp0*! ` W67o 1=8p+5\J WjRÕp+5\J WjRÕp+5\J WjRÕp+5\J WjRÕp+5\J WjRÕp+5\J WjRs W3t8+;Ռ6[7\!J7j W󍘨J WjRÕp+5\J WjRÕp+5\J WjRÕp+5\J WjRÕp+5\J WjRÕp+5\J WjRwm_&C?x $>-P&)߷zfxH9H{ D8_U\uW]UpWɩRU2 Dbv0W@-{pTr\}WL*F. . . . . . . . . . . . Z k#$9͞-Gouc긅`.?2hXP]R|i'> %`JLJ񄓺o!> АKt$lA^j5+. x8;Y:Mʓ! Ղ82ď9=F4;7'ƣi*bhssL`N(z܂SD#S\:0D-.KƝv4iHK?KNR?9~a@%Rdgf܃{#?YRG|130-Ք/^}gKI 9Sːs,[IoEkAw;B0__"_~z'c|~<4=!rla6ɌW_W O^'xB4H+ < 7#RAP ` G.\^9EfTXT`#)En~hZ2̼s3Y$YrNy(_gD[q{~c gʯlR|8Gh6dmSe/KyfoEr%~3y#ߛ&aGú6t]?V~g\)\wPŰW+]i/K& `$88l1v^_P}ZgW.o,}^#/t7_FE~'ەt?M~[g u8xU5qwdo|-ёHKG!f>{nh3?[Rbݸ*ŒDi5 4; (֔&_R6HcQ)/e캖F2*9l+KZ 5@ӵu+H]G+|u:Y?^c1h"W ͧG뛞55,˵%&gD0T0K驣:% b|J- {ؙkSsqݏ%C E >mVԛuGSJP%zR0T`!6XXRJ99 }Oj:O+>6H\y,bĆ|Vx F.¦k>!`$YH%Va03~5 DŽX;pߵ_ROaY[SѫA8v#FoWeqA7Ϧ"헞_Ep/4T4 U3{:@ "L)N"`89@GRjLݑL&Sã0yl%P0\b+Ģ\:$Uh W!9'_DY1w8("tK>\e"YZ糿J)t|ܛ/zs*mt9k(=RB*CpNg Uycژ귳t#UO~.,h!34EߝWsKbnDY1*~7'禘O 4YcOi iFn5bQ)`,S|T f=Yw 0\+Aw:dS c4:IaGѸ)bUlT]YFYUQ}wP\@MbGmV(,u~S] ?s+ZG0űV1D,C8h%'(U+ RיR[E2WHYu2LwZKPye)퟉hj4"\muFΆV;E>P]/2ևwFB4H)CV.X-(DZTa9+bDR5n6jIQ[#gF RTt>X 3j_Z-QIW [t|U7]h|͚o2dlMKIMfݐZCɗӪT^TԩhqCʯ(( []ZÍSM{AyLSPZB})nP>RL@KJ%R ^`A ƜBm9%c{X5Ygl+ [fe* M/2nT=,?w\Mt0i O~U+H:98`iLUrA$9gGEXSض+4$e@ZBITH'X,t;t$LEL5KTm_uEzmG_R1EjJm޲]O! 
1ȣ`J(rXhG`u>"}gLhUF.qVq@?dh@YrRl2G:hYFzyX*LI0Dl?:IăD"D$"vk)%&6F #aEP  <"mU"j @7N5B(IXobv#4M0Ivny 'yl+erQtrwq<6Qhrp Z".7 cn-3#(˅!ѡRq7:؎\ywl+uPm'@͓nkr5ayFn{mձr(7p1{?{ K)SQQ*=͞U|?|G.ǾHb:5ߥZ.ي=3`ׯ-6xWoI kYIɫMJ>pջ류xZ3YN(ln4u, 6UQ7Gn|^a<|n@X'Gܰto5?O1"a*Oq@q,,uW)oTV@YrLu,($*6#6݌V i!Ʊ tǚ.ެk zgR_kxqLKdX1s"#T `01!4F +CL#읉1VwӘG(Bk 3hQ9b0XJB}!9wJh4M)vuwYgVY4OdD5˕[iqmA 3@S6WH^FDo }W+7U?}7qvҹvYʓ2Sem$a〣a6-.݀+1䬇^ ǣAh\8/ԡȋ&xzȢ D0qYkx:0'6uRv[}ڻ;TGS^X4D,xUȿiapڨDUf|7O(!%S<塔OR%v%Q3h'AbQ4+nqŻ2<Y̙C:<2Auv]2-Co2K.wXTnϼvv+޽Ԇ^i`k.PHx3jg>THJ7StQok^ʘU~s?<,o_*6d_Gtl % ĺ Q]vїd|]LWkx_> >щUy9~uyuyufonlc&.c\ *El <`!DJ[#rup:kǸdd,ߓSZ^t"NGo mŎ@qJTBnЮ*4n|)DoZreEHZD.j[: S֝vOO+dLHf2p:H.NЊ7&EJ%ZF+XƁhk+RgmJM=!&jrQ߼3&H|2B:ڮ9[V d.lE9KlXӏ W%s[Y; /e\Pus쬣m֑؟u$7t"oD'$" EtE͵ő.eg^+B55ˬDϊueW #縸6qd9C/38JƉNYb8/h._4a'}崷Y-Kw6~%\Յkז&ldm%m'AC5"'T<з!n8+>_EI.?+H9p$!M3y*O{u-2<2Av.}0\F vQ'.c}{[0(!-vP2!)ۙS{#ؾ`nAܚd 5ϕ%IJ$#3FPeIL[=R^h"1MP )K$)P5ܢ'c1p@ИPJ9ݽCcu u{u`Dn%m:cTm6w{}H)xOuez{8-^~7Yu-Kl9{ۦZϋƷw@yRj77=m1]x~b+t~KMs i7iF%5ß75ipUshןy~2POro8)*#A~5D;n^p|_wseJ^(boGU'"(sUQ* jOќ;[mV m9"v'-GS]Znqm$ aR sLx57:M.:7k#g;v#_$~4f)-fa%:KUNrcXVjb)@PŒpbOkLv\~wW99h[zbL#'9f,f 83 F2xN:tHlWu]M鐎1%DȏټӠ~U2vo{X<3#ES((4'Մ*εyJB\DN#E#d^^ctS ̕u4d2r u@ Zΐ!]%V* :h~@s9E$/)ZF^n"|<4Shx9\ZJB䈚4>]:ӎMfBg|lT=yng$@%/@K:pPhB)y*C-b!eh %mGdsAڽΧVA@TВ8{bqROFW̫GڭBD1)0&JTd\go' {0B_ iAfw?01tQw2vWGv'%{ %it |gzlGd@-) 7Zk\BKS4ONԩ2n'yYA <%9#3Jz ;e$$4l)x;9)Q/^.ߣWdu&ןt~t`Dhi{g~bzB>=hr'ӳNB7͹gf*>WtaP`p%LKc{|-epB rGBo Xwvu5Q-?6ƛR&b6wo0bD -7`t H3{ B`O p @` `x֊ND!c`3L; ),1)e!Mka@7hdRq,PH VH =BM) s_gg4՘!Ȑ'5O}l^5`je#sȽ69M ^NBl2Y޹2J$F (̊hXϣfKqj,y@"][\YWr"qpdU-7lI4&%7.1dzlp^⤄PkBɂ7gK,q$Nz꼥΀aaAWFr`JQb\{ym鍕FI@zjILFZBPƹIB8P$2AB,<#:j>1eJ*SI(s{k] Pz!mxwڷ&KrLC n;L5G y4\ǒUԲKxL ]_>uGU&WMnYvHɁ+4 zꊣLfG2FBjCWWJ:uhb ;"uegr?uUo]e* TWSMj<8H7? +D! $SҬ*O]!b:\W/Gz 6Gfw+|qe!Qpi9bYыYq5f3Yqm 'g'7~'1%V ZW+VV?\O 6Y B +!yuuqP\hI4*4່?a :xSk%9;}g%a[5QkKA*gs1Iʰdu'tذ306xY {sꌢfa V2(8+|[LFi$#?6%e%sT*\RX|`Z湶rTI%8)kHF-OYc.J#ivNdC8 lt\)fTg?<V_%YbWn`vJRBULh Oa^x/{-8:"E2RH%xbJ')y`NE9:d#c/ZIC:%(at\`RxFLQSB(*7F+B[@ܒ L&F]|gE܆Ot@f~^h%N,Tl%#-OD d15-6N콢PsP˪TZZxn\}oc)IMOν$*P}g]jg)zZK[nT{ߟK|Tb :FS/ ~ y1uF2&cF&PBL.٢ꊌ|͹2r5zJj%|u%S}cꍌ9J7,/B쌅z“bDml3q}jpȳnߖv=w 8_^4+#)7Д(h&eq4R)Œ@C%mfS؆LEEyaF74'U˾o(T`p5L{s*ocAnRԶQN=1}o R*JVG)됌v,bsU!gq i)ad[a@`M.*9]4%'Q0J~9WU;a7qʩĜ5` "vӏ 'D< D^)n;ֱ6(Jp\d=}%qy߸X4"zԵ gNE)bIk&zzx%j\\k^g7-y).θ&\pqC.xK*e=Q+M`1FPbN[uMtzZzc>8</exx#!Ge =SC|5$uyǧ}| E$[k -EGPR(q#vo-Xk]-8cp""DrH1T.Z"xMD1*zt?˳~;^\_ߝ}>};s_1yoY:5^$?"'yOҞ,|u+<1wO(i,v߹e8ݓvA-B׺_|TW`Ͼf=΁ :qħ*Jc9k,,{d(.Xh'=y$9suޕ:¥Y?ǜ50ox߅Nu>/aŎw~֥Q~u6QuXgY$6N!.) ?qwtV>vv_tU!V(H&0mhx. ޑyr6!AH;!rDPҽ] U8&&HTͥ) I6^x/q;J 7Y#HUI2o:KX^^k|8ݏx<^c΍ BXCq gCNk L %(Mh-F+X{*0)ʚilqh]!O 5{GTvH{ 2&KRکiLs,JUŢUg]v>h X"}H1@#dSa,fdKEEP LڷTGJ pVr*ܣ%bYVfpzME__\'gm9t,^^$YA@{u@>T5cVwp\VF5֜4/U"ٗ8s>AP}j Km-Ĵވip{{\9{9$})LX}oZIo|)d`D)d`jtΡ# O9UU1Ur*:J`TMU: $*`,i'X2x>͎2{*||B>WuXV)MlDh׭׏|/bȟT۽*ik=߽ܵL@mg6WJAZnRHƇ;_Vвo1Gzdt6u׷F'^574=6n7m^/vMU78YǠZ+T[vq3*Ш<oqHD:4剜3npz7|0mls!o+^{P%e o Erxtb ؊0 NM}avSPBÈ)B3DK6&F%EVDTW`ո3RU.*"!{o4j&Œ5"{8G\KcB3"߹^闥0ሸ󽭰CZ-Q ~[?`Ϡ՗[ǹ2`bM!A6 (dSaaFG TvY1*W=O^~ =jNKkrd*e 5S,($fHBVtrU~B.zLOBδmw7FzUmj*(Nį'K*ڹhCo[O|Տ2h~e=  HQonƇ\;^N|7^7 b}IFv.u(6l 1sqՐ;5 `|WM:3UFYfU{p 9 'Ykd*q4sWW4O/Y|߫GA*FsE̻EnW)oz{}sBzO__.[P|Wן]n?|kxry^|H|QVzy.ԖzTˋ]`ϲE\\ؤ$1(JC*Z7/Y&ZXv%Ztv&.% l޲j=x~$ɇxwC\h?uRuw $ZO_|#.s٢|m^yG=?~{Vw~s[:U҈?R#u0/ѽ2Ó-ΟӡrW}[V׻:2Gƽ+/-,spg^c&8aqL[ Ge]xtIruGv9āT [wx=6_ީ:LsUۑYZ6]ݶr=v gn!5tSU=PZ9$9s&kik6p@iGrd8׋t (7g]n7r["ʵe3%1;GHNk2ɇr.40aYn?$[tֱ:zxJw}>ݬrOG^8z:uOv㷺ynLJ6?ҬM\ORU ʍ4kR4oX5hkIwO?/gŷR!GBMXPMQѪ!{I)G‰r$s$HQѐ"*jc[r !2toUNB}dX}!C:d&P3P <8cDYE.XJvvȻs}.i'GԷn6xn 2]}'f|c>cstL۸ׁT;_Z3J1<%X1! 
iMsQ}}OIҽHR:HT^3hn?3#í{&EqT ` $2e .X!cHXb웡K9NƒC,Ze^& SH(km7g ^wξE3&Αx|Dwmm$-'#w dlpΉaZҚ"mR-T/XCRTS#253쮙.rZJcCK+=E-ؾݺU=<ņPWΈp|gPFSJ1"' \ңHE PZ`xJ$[dLӁ&=S{0&:Fn9Jc!3I2[' oZ`H"Г Hzt/{< U,~;Q0]؟1樬x'\HIpf$c%LJ.`WGErO? ϯ'_ܮiAR6$SVA'J!#Ivzau(&"E[ }b;1F\6#x9g^*MZK)JvhI- =/ߚp&x=JG_C"a.섃a g(\ $|;$Gϑ<K@8Z?NN&L$≟"O旤/Np%K] AȌr-3kr5=uONeve2 ȏ&ҽzףU"F~XB_xƗPJ2MD)uyw\խ qblf։3Z>pƌjy;sO˫Gˋl]uX=ۆ89[s?OllyDt˶؛3?Y?1N~ k&%;g.X9mf8d'zTAo<8tsgl󬫛욵^vy_X&0QQWFvǿ'7(M&A5?OG~w??߿_w?;ݛ?y?Ҋz0tȌ~ x j꿾SjS|;LMͻ^>r&jrǼj^%n8q$߯>8I>ǒp 5nw~ò'WF 89-ً!Zr*zS7z2^VXlz`h72 CZ.7\nuް<1yvТ W+C7n}uteসU,K9BE;3??e:'ss2 GY01 9xȬLΦ%ccv \p =#+ܴ'`a ]ݨPzl>;F)F)2&gTG#i%<`, eYY&]F;uPG&p{ۤmllCO3EL<[\\ߺ׈$cAV*gtYrgl10{:)t R9g_U is<oѩw9 l#=ùZx,.Y߀Ϧܕ>bks؀-;0E 27Y Y DWm31T$k*4iHnh`'Ri]4DO\O B%D&{dm`%ŕ&򀐭p8Jd)C;*H'Φ .V_uo*\|vݜ7i2Y-1 K u "ksJUؘ63o]Zo}'\Mܱ}wۼNo|s$|C6t+dZjUArizO+QZA&/`@)@G*<rOH\=$W^BG-rP1yeL'PdB&"jJ8gO]+a*Wޘ^;Û>jZ-pT*5DOg8]ueDkB¶htۥb}ͮWx7L?M:J#*q } *&Twpc Ű\OAf99_Kf[>lm ?zG2~7ɋ1~iaZq'K7wC6=<i]EK&dʬ;-"z-4U+*,d Щv(,Tі0He]A􈚦 1 "Cp e\Rr&sNx6jQe)_|^NS*ML&aYɂ 1qNX a UFTT8^*& .VgSB#Dpa6VjlT<-e8k!Y3xj8cJdJ͜\ήhI{}谈;e<nC.m|_)i}UMNɍGǽ raJv5$&xIV"Φ6HdU5&C`㭭MY'Ճ6)dKK3I@III֌ٮ*ta5WV̧ttᚢroW2^l݀kϯq_\xi|ҟYc3#A%A9CzՁY W|ոd_h*E=A/nxY;,Hbm2"y Bs' J{#Xxžjܱ>tO@&?}$hC𣒀;kFŏw;׋`i&8~SG,M5G>RSQiMjG4CS_ULh)?+?9%#|lJ7e6~@iuuNuN~T^k-%VlaB V}`"r'C.+EEf.k0:@}*h,pwnz >tP"76\/mev~Bdп $$ruz8'-ټ3>f; ]Е݇x/#ߺr3?'nvsϻNws9;U>\-[w] XsuKڴ׺f974?e]sH #K u(lz7PF;%](‚iXȢ]X z!'?mc򫱣{WF/vgPe_s9ގ:V푺amتK^vwD~1_c=R7'gCR{0=2.Z\b+0)|ؑ:M5aMٹtxdY?@Maawm}YggrWǛŖS! 3m;Өhc#%M/%mIݐ?~oޯs,Z1Y7k0C!2<h0* Y.'ô҂hRCZβK^AGdh-SqfV!ġvZPs|5/?L!T5zU(9'4MӫYrNﶽ\z]J9rd~s|2AX9/ȌwA1+3P+c#, iI2xi uBNqCNڥ#铤\R!042%Xf8@謕.ɇEn!eLG/#~>n+}O&M~C@R]*cnűV1yel`Vf:i3˭J@S[_T-W-c4I엌\ƼVdĤ 7<띦/6dem" 2#a,e,joZ־.k78w܁]EaIjO痳WI/N+knH/v@#2-[^ɚy+u 11}I !`72jMOq:+Î؞н ;`{v"@BX)%FӔ)6 ̔!%!U]oh@S(s-"N7zft0 ^Xw˺[} 9U|8.gٟz{YzqBvutq񖮇f/>^#M &X*\OF^%;kHE>ג)E 0TF2(nʤHjr$ݚ# 5I 1ziV@GTR*w;$fF9Y aN -Esfͭ$G1Y R[=trn`ɹ]H/rRQo4`'& 9D T9G9DŽRn&x9)) "erbrX+/B"z%Q"+ZhslrHk3X>>;#yl@.J . `bzIzV9hГrǒDn}Sf^ۻ\pwks jǿ©$!{:L<3("[++,Ŋ`aߝE?29w?/6}a|vEvN[o~W^n7z(ZpNϱVbĕ-ʰBXR rTq#@@Ή8biv/_(F1,@#+iO=g0|T=4OșN(2Qx/btX ZP*#n#E$n\/}kc '*hAqYsckE$equù^q%"ba֞c$=|Qo'{H7B_2X!:CCu'PTɮko'pZas"kKu($KכUo\G #tp hI^z 8-Pm " vj=*G> L 2J l%)Ŕ`SXh#+oſ9u*]m OTH0"pQ)8f{M Cn2kQ_J{]=?hpu r2t a{v#Ek KQ8Ѹ>K{MK ;g~"E*q+%IUr`TOtG/ϙ^<5q~QAt"ʉVfp$3z.~§ܑޤŭ-r?l~ԇmfklUj( .7I6Z6/7h,퓲jW5F45+l:fYEnh{A  S7W=^PYxqry2WD/ ޗyokkNOUFuϮ_eU@_r/ݝ=(!n)v͵v<) GJn>V/:4=ي ڨ51g;~3ZL\IU1U{ˆuMܙw꽶X88'")hnB{Rt+N{3/٘֞1RQv[!({Ep4g:E8i(-V3͍F-=ߞNMfLO@>Tߝ3BlIN [c# 2j~^lxj]tiӅ>͆cb:N޼ʛI#" ޽Z`v[neٗhFp4<*8GcE%MdL~Z,r@[3pnm@LK|kD}K#X!W!3X>@Eȃ'Q+SfprS^y kJV& kYVoƃ'^oṀ)tmeYj\{*gD"`js2A;5127xR.8Iܺ\^-#@HitL*"D2âҌ;d&p0 AP$@2q^wWy,B'Cvf7 F4DEǕJM$B*0ci,,uFi?r iQؿo2aSF@%K-<X(*-z s vpP7T fÕ"FlhGng(3h%QؔbG`#fC8jи8%L??𔔿ǂ1}bqָ`|5ʴ@ /E[ ؇A2|3  "!ju3u-3sO@BOճɲfpUtb+ -,IVDvia`wNpl1~|f~T×lh %Ry>TWhyAW9!Д:6z mh/6 ]>qGaTliLYzmU"0N1s{Y9Tv(]L 'g?^?Բ'V=ꆬF,2;bB? /Ӊw'WF+A{rU+ PQ,Ed% SIs`#Ő bUp6(ͨqQ茑5 ec~8swO^߽8?^x8=ݫ|30|k h+nCFj uowMkvM۠k!u5Ks+}}_N^fnD췋/,=u( E$l,Kυd6}v V+B1leU?_ Vj'ݹ/T"!f<#CaK P[l +cc_K-Ne7sHB9ʼ5G"RZ) Db^+ *5jCC6Lu^^}_s'0p8"rHL,ȄgDYOט{--ҧƨSQgCk{:bVs(ƶig-1^nye\ո[>% bA2KI#rQ`+Ynt9D CC؃4]hyw]pMu˝XNA{10" y f]Sr% dKtYa'ᇎ=X. +_ |$A%XaC.RtO^ـTR:7T-9KE!\ɔF©3X7))J_ʾWo@P ZEIN9wHgWeŤ+yll>A76~AM('`ʷtN%]Ä.|*POF^j\G4 m7w<2fO@A??yq;6+`9@ӌ0Ja]*/3/p6b,:uE{ym`Η(:M}V1-FX8e)'/xl[䱮k*PFQԧRK_6y$"kY"M[-2Ef7Q/ Tͭ̑4g1"`(SZE=j9Br!еU10 f1g*sA9+j"*V3/R+R3Ey:aLLP"AbO`2%rdPKr5Ԟ`5 \Jbu(*QK^]%*oTWsDZj zj<^|/2``LPb/?w*fҿ))FԬp9,<ʾugq2M+ ? 
inNJ_ݤ_^[ N0MGv2w&ճaW*_;_Mk9jJq IMC?@{~XVSg`?d;UEG.X%-h&_-[*e+{3Hr=h+;N'U[;Ng@*??NcE$4J`-4%#ge9.La[+VUv.eɞWuW̡ڂyXmy8;Y0VY)HNX1LjYl4zIRa;<x)c[@-߾V`&r<uA!*P)l BK7("&φf4Nl  Ryd}o>K|$)Fre9XȭǤ}u-\+L m7X0fGtiR7Ⱦח<[4N;=6Q1(O];< wmq6S$TaVmm%:y x[n\<]%{崋^WaC\gP^~Oj'_4(7>/Z:oRq}G2=OV=>qP^ESR,[ NO%<?Qkqx^>zV3*6 Ȭ6֨FVL@=ZIpKXw̾M:5ps\_yyw._U=Khbtѫҩ"%Z+ӓ 5jSIimj,j.#vAY^ I Zo) H>7=;٥0i t?hS^kk6%|F{箷C@-*ؔΑrdsU0Q50(pք@Լ}kO,`S9MdNeNon>Y_M>]o?չe}[;QxqCȁՙ{گ'Z?'UWrkqx ;"[G;P-ocf UU!CXlp \wv+R\ <8\ʮT:r%4lldTPm=νՍ$ >$)b}_J,IR2#)2@l*+ѱr &γvW#F}HtaF}W~a"|N4$ ,aX)TXI8F ̖}lUEWPjUr򦫡T>.X@ JF>[hr:A*b$^kD̂z(u6nl7r22X"i[g|(-Ժɞ--qu3%P(LIbofR:@S+R, 4T!{hf6ې({Q.kmCv}Jl(T`p5-zc* 1kwӎ}GI $SSEȚɑ+R!0XBҞщu>&& ( C@E@/&.(>J*IP]|a7qyP9kPGn] @Q!bm5)Qs _( Rx:4Ƚ A%K4ȢZ06,#VbO`+Q3LU9xW\Jtb{=NC_O/>.7;,w7ר{B9_S~zϞ/!$[k xjICIY $HdZZNZp `+D"p!DJbNb\+%MD99Ũ$"뽇M$x:5wǎ־~.[ʧcE>#˷, Q(܆,1&_}Hg7C`i|! 2cف ŁZ$[=#Gr)`(ˈTthY5RvcW)Wm%lE Xr%Z~ݕ c )LA B5OCNQ69ar >$M%v!}'|SwPi` '_yqeXjjY5Ӈ~$[ࢭ㈎i4[F8nM\}4ݾݤc#_,Jfo9 3AD?+7o9#͇c|IHHGcI4܏wŁPn:1X<9}(F}Qݛhć_Oc`Fʻ;g9{9DN `~P޹KJ9Lirz+ 0UcTt0YZ SUΨ} F޹J::o^zZ ?.-C/g݇WDq'Ff^H7\6F}Ow"0M"9{go^lˠl޹Os>9ș݇x/wޙLouts.=;1<0[^<5r,TsuSnk]mޝ'2I T!dY?,¼QCU\=\^t"9!s6u?Di@q`IHlBpC{s~œe}l}~aZ Ȃ<0xGjل!) 9׷[Ր#RE>0HTͥ) I'+BgCXݽ١寳zry>s{]jN$-MiZ(:ӗذ BA6:xu("TliaX ;޺r%mX hA/o SHȑ̙PsP7I`7qv d)[J#c΀EX4kLY샊&8SKDV)mMli|1\:[i,YRQzTc5! QDڊ4%j3lV:,)b Wl6cBj$B4r:\|t=Se@B=ehj9?,ާmeQ=KTLE!;@gƊƸ dNɵ&oĎ h]n/m>)/d_!ZWm~1wqS%Ü_S@E4|@(-S.xc(ZCMǣQ 0#Ghwt cx=.8r*# 0r*T0CkcbT QdUKAAuu rF_ʐS<F[1N#&j;nZ,9]#BLWC7g>$`gjL|:߁mrֆ{z3㭗-/᦮>CK5^'{eǃ4Yd\ $HOQ*O3ޜ|u=klvW840MoE# .{w!;N򤯇K܍WM*5@)_Y}c2M4!f.RtF7_W׳qANr$!F%E-{QYA\c!R$rLeZ8?,zti<~lGI*FmYdZeI7y~n&-sɿ}wq{ߩOtW~&^tɽi2#N/dLgEY˸8II *7/#hݥWiIs&{:8k=)DgWYK$qc-m;g{8Wf"V x )1H|-ou CQH&--tWU]]U=aHϦ |rA? {f_kh#(\ߙ~K{4Co'ņ: VC[ f;e+ǬZTd*:TkirK=(mV9kUlhZWF.=Dž9>կ*>ns>92[.W+qx82}jcV8> aTiן4%9:nG,LLRWE}R$rn4.5P̢ o)sH)xqʹ/v { UcL4/Vn.if)47ʝT2S9*vg^Ŕ@! ~H׉K+2m):;Ylm,OV磂o^^:ƛ'yP/yY7 oa!gʵ);[RDz MqTzIC!.!o`;\ ;t}iBX)%FSay`ekmq3eȻhP1֮Ql=N;Cm=3{ Gf/hX팜ub\%~;Ugcj'SǪ#V,s}?F.+tҺT畘U0l1zU!(bLJ-(NڀJKs QAa ]SAtzLЎGޫxpJŀ0#QZau9)Nh<5ZR6HcQ,JdYYӞ8t0i벖iY\{/L"XKa09" b_=yP#i@/HB9ʼ5G"D)Bp D H1O'Eڠا +3|3!) URdB$(g'<#zh*J>KN.uv*:5atfF 7mvvu ծK7:\U$RsL=x-m0+dRP* 415W' v= W{4ހDGL`.P #S-2XGР< 7; ]QmGhE a5h.a/(u4igܭ'.$!<Yg\&|]~:?Bp',sqP1?G:__ (EB%ƗFJ`w(paP|b=7R@7{{B@d^l{9ؐ^ JirX29YӤ^9@TzkR?]ܬMǿdEUG]JG z)0,`C]ɔFtj9~f~B9;e'mrqvvvkڮY#ix}NE^HGe6MX|^O ~y׳x1u쑸%B{# ];WZFv]\%*:'(B=W@boU"W}W@m*Q9W ܕ=W@0&toU"}W@-AbUz1#qkF\%r$.WINDf{#\FJҝ׮hqtW ͦ~:s޵rqZ,Ks/'s7t 0RV ~1"S7h8z>LN /1)(Sy 4}qG\iGڷ@_l/pŴDR9 uyqeF,e&Y x1 ڼxnY/r'1L1R*.CǒDn=&7#aAsss=h2rwaOu_Y$ ;EN!- KoGQljmQENC*%ߤ`ePshs""M3e)ozWWۭ-:X|U22㋟ëƽaF)P^ݪjsʒuѶZg|׳u`fM/HN[,=)^=s>'=7n@Hu1;^r-# dRKJ1 lp6HFpB Jbp]?4ϝĊ"Qt1ri=m^AboOrѯ@_~$ aO;3W>. }Z8L^^a:)P7=YOIk;FȜ{ UcL4=N99gf)47UT2rlΜUYlvUe qs}kDa?<8;XB~y3SZ§X^C Zfz󬮃^誅W:Tb.=ր>KL&~7?$ߐn/ 8B+_Yx2[n/ ;4 KɃ:$ɭ$h.'q;lvA*CCG%+*apBYC뻏-PmWg+=jM5g"846Kg奷f4-8ri{㲎rGwE]Na.sk'b6A30[nsdtE^J+FH3s˜#HB: W{Ǡ.]=o$Id`P 9ʌoЌH&BR`@o?pBğ&iq`ܼdER` D㬴>F)J ? X` O^-GE"Џ "_jK4-ȐP ,&T"I?"*5Ҭo9r(\!Ɛ . 
yB͇Tw"sh0r3w2EYy2@X29aT…_GMѿ_R=wwEiAimFY~fbOmK $CI }r)u/RF#l^F_|ƧqQ"YG8HDlL1ibMH#73z}4<;JY\[_\+V`=w1Z+!.27^Ӄϋm|O>)htᄂхkHbTȟ?-k7;_5\)`ɒ(Ԋ\֤nӢh޼l1ҏV?W~n/|,~qj-J3m<ɧٮm&_fm'nzigB-]#K?8R zhaD0:YW_S`ye?^/fxUͣ.okԎ{%Fӌ#e`E?m Y2 <*?qǿ') bCtƓQ_:鷿~_??ϿV?JKy'>A{C/- M-; Mͻ^< &krǸ77ޒqX藫o&/i(K߽8ݬ3T$ҺټzxJK{aTnd8 ALWv0P@T 3s9AI7N !z"3o&)BY+T21Cbb,'ez@{aiv|ZϣRࠉJ 򥦟̥EiND裴r}D-f˟PʡNĞF=uͮKwn.=_R)[# R%^m6HAd4t-zj{j{j2 " J"9 3[d #aM9 & ܲ?@jB ِgڔž';}!96 ALWTwE\RwHen) u`m4p\u7n%c>1L 1}Հrq:!iYΚ'3e8&ջ؁^* AXy֤fv&R_Y9daYsmزzB^^uؘ3S#f^Bj 2AtRcfZ%j(64]&j)Qa~RVoz3Mɽ3m-V mYDi.#S|Lc61E9>%[୻it|g%1( ek$+NVR&g$E1>iV"T\WבM<7P"Z.+|MFvSz>R`Y@413,9D-#Iwe2䢕1m0C]:w4wvW7y>N tہ߼ 9y|9 DGo@`& %|M51RyaBTHN04Y0+m\89& ٚX" c*Y1RĬ=&v{ؒY1}cE|QƐDd:M]Ek4K"2H,OV+6@8)fm"}@H4f$1iF:!kIW.N^g5-.vQvq4QB$CiZ9WރsYtQi^i'r`9"8:v/vkme{Ur^Og]#wcW sF:A7x?#@]8!Z_ywѥIsyv' 3;xߨA51s )] 21J0/=0Wy) & %DM*.m \)9ֹ!ݏ) K !v{x{eЭBldfDXAi;{>fl`s3Xh$)I\Ur=zg5Iĵ;2%U'399n*hhR#}RM3AY͸jPqa|Sf7 1KMxM,ŠREзom BF$ڱ)َ$Z~΢W 5l>⢼[]}xKR5-\uqK#%ZU$0Hx5H\%kUEj^HiP ֪8cw<V#4$q5ߢ#flf=!DbnJQcC+mEZիOW^C6!ޣu. ^.5B{_lbBjNGw,K8!hk<؀AЈ 77wWGWyZFY4i"hEbr⡒r7uU}9A/cp%s=8жx9²Y}6'& xfFT@V\t:(I!FR Cw7gCF׳|d475m!4<#Zz7u/o6[Uһl2j3ġwgC0wLKJiv ap% јwpcD:τB)1$08sxf2ecpPC 7 l Ύa&O')y#W<DZI w:R )dlocaY(^()B][ZDHPyzu*c /TDS(cIЈG](Wao1k=V(El *C\+mup:b׎2X|l}> }g,lm-A}wAre{VD<2j82}8B kc?Yf ]e`BW-o=]eRtutE~N(;#efv*Mzΰ!fc`d7yMFq9:7_NZ(p'υ)ޅݫC22ysJ !b>=AX-B@ʟRL'׃^RWGw߼)|Rz{b ') TY/3ֻˤCTpiv?%;vi Q^kjE4[qJ0G';;0bJb؝։,msU1%-kv=`%R`l#2vQJKLJUtg*5BtQR G]Ȁ5 ]!\AtW*tQJ ҕF6d Zʔ`0TEQ/*산 ~ۯ|haTh`@T~3#51OϷݚ|n7Xɇq5ƿ^z>3^Nz[VA餋ypk_ xu`1K4/E=ߥ7W0oiLe{&Q1i,oyuopӻ] &doH0,1R  "ʜJ~łuRiEZ2\ezZzpgM2M[fϳg f%/-K@ro=u? vƌT ,?'b㏘̀Ͼ:^f~_jvyZSV٫ݲ /ۭWKZ}Ӛ ѡsP38G5jFL`* (%B6HW̌@Dki;]eq{:R+ݥUYw+2\!BWj=]eFtut;UXW J%CWfǪ7gej/{5~pő~h{&i4{ЕUI%ʀ ]!\EdW*tQ2d+!BUHW ѪcuN$:DWUkxW 2h($/:~(]!`!pug"ʞN݂Xw]eBW-o}UF)XOWHWXõe 6 3"(#oI .`Luwh2ҙ>tF)Six*63tp5 ]e NWҐN4cwN3N;t2hE2Jٝ"]H ]Ifu2ZNWˠ+~A.U;# rdǤcվh%caoZڝQJWg{zJh} =Qz{etQjD+MDt2\h;t=2Jz:Ehމ'5Yӎȶ"Yd)YкΌ$8s_WS+T}kV`Lo-ZR_˭;$qv &9FH HV~#Լ'()B\]+Dm;]|O?zt%$,;rI cHhtoMD0NTtFXi:ʓi)D*3tΌ3gƴbN@gLpMg RPmbNJDW!+BW-k=]eBtutegTu2ؕ)p]!ZtQj˞+ر(%>:]XJ}dǥ*~([u]F]AOWOz :DW E UFٶ=]]QMD4Eenh%"rUS4.g<2VsW~-VmEyz_uH6@=Чbɞy?˧a>$\ . kތ1A;æG7ݼOXzT}2k3нVipBiO w+83xؾ}R?@)`|7 hm|k;z&h'8񹼞zT gݷKw1"xuxi\>԰࿱vzaeX]-óUɢ?-)oٺ9~~U"YB,gh_a7 {5,)6-ǣȨrc~; غ5W.懻U^ZȷZJ-qf0El2x;bP5ގF9d~P9[ 7g+,X9V0ԧ^ > b.WQ| sVGl{1eN1.ud~&,$R uAcy@A CL 0pig2Z. % SnJV+C+w}x A1[8Bi֎{yRWkӉ+I><(n?c|%N ^M鰘`4hRr4I@Mf$ـ!1H $qOܱDLDwOHU6F)D; &80uh"!69kEU*r"=2{: Zgkt_Mܓ}[={OMo]̓~ӛpöpñޠܽ;ך.Qd޽C&}}z<~(20fU3HU+{ғy&" r+ĿG+us a$]|P>]??],ޞ<xHg6'%FC4,*k=m ۇO+;\{r=L'--nNcGS}f4һߛTuŕ3 <X}jԝ-,{.6aٽZ-[؝b䍽;źrӕȀil|Mk]{hulHb]41{וJ/j:|#qr9ވ7Ա %gsH<&څh 0Nej$FYD[{{ҿ-FyIFysZ҆2!D丱@8҂JFBy0TJ )™M.ouhŽ3I͹RiR?Hmm/a$a0{:⴨ٯJ( 1y>f s XE}kbNGQzkM|ӶF~PЍ3r WGΔȾP4|}t>::Rt1KGؤgII_~= [PŚuw2<߂^4Өc[lhJH![ڥmZ3ࣟlӊhXwa-n<'3 jǂ?[jYaGHqZX\6PYDsЬYG 𷓞8nzw/)_S\X2PMLi4 ֠ɓ(%QoS%փ%1Y0R&PƄH88SZDS*KN<)~)95 -j3ϰ/;mOH^Gq 4㸶u94~j qA?{Ƒl_vؑEd&Aw#SLRoP5hA8隞UUux@S,BRp&B b\j;ogV,<1^("H agL&O%OYpKfh'M4Xީ`eX;g]H1҉Mꝷ[E{~jֻMa~\:Dy8Dh hp:DAPi+Z9:q+QA8~"$ Ղ'=0xऑZ \29ye;>#`\QƬ% iHsjH'&}\DL55`R TNVw.*x d9XVYgl?%`;~%O6({;>ηnv`ۋKk6yǖ},s"d #:2,P㙴@B£5CO+h|ytd ngͅ^ij7/4$;rO40%FYzMTr4EcYflpQ&Oк#=omy=.'ݔ:ts]a~,KR43~z\r, |0[+!' 
EqQۙͪhlE?~SGt\41E!AQp.нu4~>Z'Sr!3vVO.G ~ØfGey{~~V <=Yux0W2D+SVF S2MDw*)y]ڿW\$Gw^IxgS%vs 7*=L cm_(-58ʎ?*(R y\8ǭ]Pa+V}Z?̇>Y[cU>-ܗ?~:b۔F :GYDhQql7+F -V0B`(NxyEy_?L|x-gbmJJJJPZ!Aj*[ &  W!j "uH @CNNred E II'8߉|hO~ٙ8+~-20H>Gz5oAA5ް23VDQ/cSź)C|0 .#H潯i#@ޘ"p"t+>{v:nx7!š)ksh87'WHtBytlW7\@QZѬJyx?m(~vNjV R(y=kw`3|THrݺ=Q+r~k؞J3xIp .qr%qruX\zuQ7[gUk}muIU0FJJœσQOef9+Dh<|V95Z-m= SNѺӶnh{7Zv:YŜ 5W#G1b>el+%{?d[-ϊ6uO3\Gr?oPSj}kn*qN`T_._~wo}]ۣ{w]ѻ_bV {m"(yo~@=o~ڼkY߼k]KS7~{.+rW/J;,~:(|;{1ݫN~rլb> v&Lqɮoq- 'i D|3]HP@%4kCN0 뷥II;Ipxpi"NQt#Ra Hhh9Ddg,靤mc^Yϗ]_,Ktp$K1bct4jP'f\rZ T_N':;ٜ2;xv]Xts vU4KBe RQKm ҚJ)5 @@fP-Fv.v(v"qC\Qj, R33-:̀vS=l7\%}v|֔ę?_8GHi l|&Riv}1 PFDR )J.l(Rʆ$!}Txbع\ϭF ЭdR(#8^>dVCd11 q+oIۍʞڱv!S#n싗gMN“ :AQWʳȃeR'"Y5:J!ѯZ)Ny }MIj9k?2Nۺz]wUw3/C~kfCk͕H]hY]J Toш77Lm:Ow'96C-0qR Fra4AhडZU]׉]8֡6CkX/yiƧmM/21>R4*DK;Cp$s9) ,IS"Oc^D(occGyjW[mx؋VF+>ko{m@-˭6tAGy0j2;9rtr~~< ďY&o&/~~UWM^?yk?)'R<$O:J;t?)RJtLQ/*$a(Yp#VzV-Yzg^ѫ7~Phr6 碭vM4RՃtt6uP\T}u]7?hRQ;բf*̄Ѵx y]ȡ3E G+p6O_AΣm_&cAcNHkaS=1Δ ӘW_ޏƟF-8Y`[gR`=LMfsmZn$.о&=B@^G^6 ZK׻Y-!^=UnqHa[uE7X{ijHQbh悷<'HL&VKlr0kQFgl*w)yl#?ļ J!!ݰc8cځ!81L }sD@y-EeDD HG4Bԃܡ(ӚRD3qnWxw q$]JAs&UᨖGW^_wvQ-kܩҜΧvq%PST=&#*`ֲHTLKBUW  [U5:SdZs$xKCb@zb*$h1\^ A2 MR]#cg܎*aag3c[,8=>(.$6;ϖqK@N}$bgi J;? rȈM@@q9⁣`i„t4)!B98Vx[f%Ð=2(6T^r'y& *oGIȍL>&@FW܎n< +.殠vgc[Q`x$I$:Hke{xF%7B(B"&tᘦqH)&!N #8hE &D%T{cUu!@<:ՙĬc6^Mj/ZptVj\}J |DhDnlHSlgp`qCK19#&QH\džQzbRk_{$l25ݮ:['Wvʇ/m׾Rf矮e^EC5PEJ$`W`hhOJHt 丵mJɶ*PvHDD1Zjk%NR^rBYg(yX7Y_O'yVq]okˠlS,,v`Y>e)6I%]qW l e]3uߪһ0D\Zׄ Ƿ|UkDIiI`x8eZK. 4>TQ ؘdA@I_2dt=MF 0dyn0$BҾUF tQJ3 +e4GtkB@D_*#Q 6 +CզGtvhW-UF)@W_ ]="%y~c0>3]W祫>ZCٵ,b];+B2\eBWw2Jzt(גa7t ]eRu2J ]D\} հ3;ni1.Oɛq8bp~,&㸵"Fd6٪<}9X !ABNFRΧח1278S0w3u.%ebzzOYrB+?T~[cI|[ޏ:nd)T-_P'EN?7'&(AsebI(Pz0Q̘13TTI\%hJG.堣sp>Y!V#˝e)9U, e>m UȎ*`}̀ꍉ~QoG؈VPu;b0_ {┇ `8ǥϓuu}uRT?TF&g:B=^*@8=kg!O[p3\³v%dlȳqiJ0 B2\#BW:]e ^]Iʅn,!^`5[2v=?+Řjoçoi=U"TB-PT6g:)i<&uofE-n\ߵO[=O|/{(#!xQmi|os͋8+W> hu1qF?/ud'5 f1hW/ Wd7n,P=WCW|ϡXy~c {0꙳=o6: %1+>ձCOAH%zDWX ]eR&tQv-m@W_hh} Be_ >y 4 t 2`zCWHo*etQ^"]e[Iʀ ]!\hy*5 +A68;2bFguٶ/68UB{D\ysbGٛDx٨ѭ~DG'P0{C}*sZWb$v[uը8_7Jy18\OF]flQt- sJǼCq\6F%'6DB-MEiQp  F8Bdix2Us.KVO#aъrU]FbUuUt|}48>UK$Esl9>R[WKx]/\mcxqFڌ?jͳ s+]cs\~|m-JL~μ/-~ /ۭ|{[v5f:e R:ԈU#X# hTj[Z>]d@F cqтo$-Kĕ:V:3(PQ?ב}Кz#@zvʱ騜 (S+̓53Hod6J-H %SKb4u9g]Uz znl $ TR$2AB^O5Zks 2%mkΚ6AXg Ec\S(=bvMF;'QZ.8Siܿuovr/3Mɵ3= -Jڼ3>zt͹•9whCFp$5.$BAMd9T궦$:H[P'ecJS߃ZM]PN{a`)ZX T$\lM7ئf+muq&s<\m!"8XOn8\~ߦ^"HI) ` 83 F$t<=B AJS Ąr$d 6 ,Z#ՁHO)Z6E5OEC[۪Xm՗ vw*v~tY33ֻ/M~\hk9)J6 C8V8rtV& $r=bI4$є&DѬp:BVh @k C:b`J TA҇*&hȥ@3CMP3Q eb% O4Z01]p]b!k6 ;3âz99Ϸ|;diߎ-Ö?q:8ʉx.-%!rDRqDyT>] L0([g`yGLTR W FLSIF%Ryޟy>7юPP8Ξ9/Y)i$IxƼ >h"q>P2\_YZaLdܠo'oֻWhCqCRڶFa0@/AYFo__9'gYzMTr,E¶SL: F=A 4~og"t7hjSF:b %$HR3~횔}Q]rl(˴E*IXμ,&Vtraj{vHZFo 7E%Kjxt k|S{x!V_e>W] ]6_T7\zI;BSƳ%ڭtTH8En#E>kL8VqOs5|C"Q8S0lӅ4$\=~^kopJU?sv>†K,xT_>+]^ځq|}jQgZ? Eq׻mBD 0Wا ~_m_7P]+xx 65moyW92ns*m T7;qq H JfELK:>ʟe,fgW[vvA|R9P=\wĪxCl>P\Cs3My3E5A<٤jc{mt=]NCu'a xMw{ͰMOx;a&zV"ɗ--;n4 ]a]-WNTv/]︦h<=2:"O`PGB4 xc >@R$&/wɴRUk&F6}ȞJ.>۵9eVZ*0M%fWG_ ھM J 2PJ״t.qfUB\J*V*߲>d'R+{>YOf-fQڶ) ,)'u^o(KtTȩZk"Ee"MC\h&6"28(|q6r.rF24Cmgy!7a6ȗK-2˯"͏ͷ_=|qٖasG{\3ElFIKU[EL %0jRyq!̡Jou7ûFKW[KġddJ <(T)I@{S+JJD(Qr%@|HjBe?"Se7znagB9 -VpB( DG D0.:R6+P.<0e>}fFDĭD W $ PKBG 4jIAjW9:P<I8ٻ6$UX #%8:M [觤5E*$[9wzIMqh(2]U]zYPXMnRI}8 gBl|m ۅǃy)f߱q_A_ =ߚnʍQb_ ~?΁g?rILؽHF < ] C m_#Q@ xPvtANDLN^Mg(ϏTͳkUG'OHtBytl!W$7\BM:'ʘ˗ 1 `?VןBn1o_M:>7UT o \oji<_|=IωS<>NapkNMy61?kS/'^NTcddI̹r6Jórs7Dx嬮Ԏ;7$uHmðahfys$h\'\W^yv~6[59[GedI2Wu=4fnO|`Y\Gt0_ws;`Vt*qO6l4+RL\ ~܉9ѫ{R1AllW'C,rB\$:VZHQJ!T6$%I(sOZ)O*cQ g"! 
ĐAZKyvX<,|$\Љfu<,:뚭^#L"D\ Z)uF{Cb{m=>"/E>[*\C-:4ڳa0ys]c1E-{ыZM2jsv7|g "hai) <1T <06@=Dlң3˵n$+`dyE]Pf% PRj"ȝ "tJ%4BhShUvgYi}20RAN >`τ뀙s:,woc^64Ddmq [3m\͵i vw3b;±eձLrK߸ěo5^l7u N^ۓgbyYr$F$Q(*KI U2s%r%p55ieBǽH,qp@h4"gT?$Q&}Qh18o3 HL.YíN*da a1r6t])d#^ւļB" 0,9'A81'2k7'HdjS"Q8ʖ@a9(-Gp$qdѾ0#g=ƓTMP-\XN+ )C/W9MQ-Py}0;YF]6TҜ.v(_T*NZo<WqVYa5`,Fu< RBrd:$Nsu1q,tFbTrKPHY/*daTYX89^T.)s]f͸{6O7pھǟg_%6QSBjf(gN.$EFOMh6PWVbs2dgqX,M.$"A%vDD02WtR䬗n< 7 fWvT- KmK^]9<&I"a1DQjq39`QDLfʁDIQT&!N FpE# eMKǘBxչYayX ,"^" $"Sh$DsR v16%(g)2jM/FTtepD*@lpjT kȍTHVQtRl|U;Ӫa.:풧EUX.^.rqՎ6I&>ރ"[+"VS*k)1!vBZ;X+.c9l=qu7sn=YUmJx+f+ǂ_rn.J-F}vÓ`Pϭ# (e&oDD|*:eRPD UN$!Yz' `\1'n̨@I 1jMh%10ZZ,EM_]*\,^x("83;!MZ22c'y2@pӘTMVDnQA)9%w  D.2)'Aalo%Pׂ֦HJ;Ou>wirjD#b iM`OC#2P|2_^|xLpʫ)&f!V?[,'G^;]cIyE)]mlӶ9p17(8հKTB;𹶵8Y Uj]Qm$"*NYgҗoB]\Ɣd&j&qVćX)Th`i^I9Rgmv蠞  \T/  F8[]T5YYH_ITe>Ra~x7hMr٣$|>/tm2[פk^;ړv$%!e*}(ZJXe*/]ypfOFx" s3 I?K0[GĦL"Br "gr_UIiQp^Pcgx/-+Q߇I*U8hom  5SJW8a':E!ly8I^ze(I;IQ"Y)%HsvIdE9I6$DP <$'ѕu$P4۠+1[+iyS}f9ftݜ›w].5+IX{T6+Bν~#$)xDغsjzuNmV_ լ;Ė\_ۼU{;o|C([<]t=MFog;wWƬWw-bnỲ(P->'Y|3~:5N( nڍ JU9{޼2A ꝫNBewmqW& idZ=(^;K?IިXP0ĭ%N{Rz Ч+NSL %~Qɲy ?+o̽QMS_SQq9Gǡ&^ejd*Md"C">jAM8.!q&}s-6'@kc<(yo$iGG(^] A-BW@k?(]!]QDW\t5Z ] t[J煮vёbjyf@i>ѕwA+\R hnM(>=l7,mfN9 lyP'-hCxst |n{gA'e:[_]_\o:n\Ǵȸc 0ӻJz?&nqe0S!S|0ißx]nWĽ*T}㞪 U&TT1/@$w(?=0Vg <<!s ~e#$?[2z(Zá7 Yjv9*7qw!iWtǁ՛tvw?vU|,wVJc[qwymLO(uw5R ={&rLlM:.N-iF 5^uP&L:4j^cu*֪TuʠQ)jJfl@ժUu<'SX$ku`?/5n]|"!8DHOoɤY3Zpω0 f-V&yߋClTS;sX;+YPRWgNܲ mc6J\ǽ;woX,l2١L2:PafcnC. ƬbjOqQQ!(}tBF]F8H$Y*[VT)lq8A1#3X2.Lf|}!s9 AYUW-wrZ<JII"ڪTw$ף(]@xh15wnE;If)%9oP|^TBZ\.`5 j%}IȐX(prs^QUKcGmgŖbScHFu)[_c 37% >{,R.I 0jA 1c'Ad{j-xԬ;v4F%z](_ \$0`gX#ؽؠtxi氮K68Y @R"Tr]vu %.@2!VN5!1ѕ<q#Yge0И!Փk.keB`t"! ?$XEiޜupXaUhS%vTZ[]z2L)1N-` \:WlCd&0VjIQ221ِMhI6p.`8e^`U %-ku%fH57]\Q?@ m :Yo9豤X9Z ̄V', MGHhP:֝ڀJVǫx/GtIPfmw$aU6j` p辠{lzT Ƌ!(miq=-n=q`yM{uzqn9׍$ݣZ0:{ qt&f=)[ZHCQO=}"JfYDw ÚZSМ'ՓFhׄ bL PNFz:AzV{20wäDy آk* F m}*yNJTcۂJv0Xg4)1J3HπOt0XF7o06v V"4զSkB $\sa~Fɟ!oY^V1^8P 1-*1"@Zʍ}hF8kabt\:'XW![QٔJcT-،znzgm͊xfzPiU{/A3m^'24t VqFb]Ρ]ۜYg[:z&D3oJWfz-FC6(#< VAP8U9V6'4zd{!y)A('{|tHg=8~C`7kΦri4 J.YyЬ{|I4M]hEWKl$Tip/h繍.?!u+jp N) cnXy[/.NjNjOkd{[=k } ݻ%g߼4*$.xMʹuNI6>h{ 9Pߏ;iozq=p'* _(~i9N0x1N u@@g'8P'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qYh9pRKq 0(8 4/U'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qȢVKrYT''Ob ފ@l8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@bh!$'{''G;8 䍍Q@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $Nqzi={՛tfy}_\[PoKkj~U.ޟ_E8.ȸ4.0Jb\:GޣMZ] /ܨBW@nj;Njtŏ =Va䟝80}fz\?/]=}zʨL]芅z=V/[mCWK@ҕ[] >,K+F:] ]!]\T +CWnX ]mjZ銕QDWZ ] z}] 1ҕ3a/{Esx4~Xv+꛽u&5&;<ڭ#1kC+'lO]}_juyqhh!8ϨVsB5U}NRKzf7;pW"Zmizd#4}4بjAt5<pb^1omj$y#zt1`IM05K<] d&kFtlbjf)t5:>t(b8JrZDؐ|$$noӿ_nl헾vUv[c2n$Ղ>_?߷ NV|z9o~/EEBw.NO{^Gjulôhb%ᠽgpo8` 36߹8kmǣڇBp 2kv릩Μ㺛a:%} 뇮t3ӟ z!ݏ{Qx MEκzc]_ WWL a!P%_nܫnpB\1 ͧ4z]poOG =s~FMm"=}_!!(;Pw$hZZsշ[F?m$I+qNpn/; {l2LXYHr Uzж$m#"3N,Rͯ.v{R6^d-~egOIBK)+ݯv`<-7[_~?EOctzA:})t>yfzs|Ly;xkRW!%Sy=,ׂC9jLd揳GlTYATG'u*NO53~skˆFL i/[|}soj)Rf-DWPn˥漺C _~ T6f^>:xGVUt$ԀPyK ZNNփ[d=ckI''bҘ'{V{Qz߽ :thO.ol騡QKj^ :XûZhLhKw:N-8oyg_E3h,uŷ[6e!/ǣDP*OlRpʳw;'4S2U8>%cr%Ri %'r媩415ۼUbCwk pzmUh.8$6s~*;\.v+XXH}G&r/jh#xt%LyDWJoJr,ez+4S+%aZ V" νzDz M#\J@khD4}4ҧ|?SŇp]+Dk:؞Np{w-ǻB쿀h:]!Jz:AHӊ(Vҟ%dy QrwCWH?ݷ`ѺwU.U=z(]t{:TTjGtUUBWVҮT+_tǻBBWVuBtutŕ'o Jo+DkY Pjz:AP)=+.o<Z`Q ҕTnB& 9z(ZJ7zgulnkFް4µް4et%=K K+K9L{CWxCWVd )ҕ?kt'epm7e$],OfVxCWSo]+DhOW ]7B Wހ\2]B[ꡔ[55tu):!eBm-Ciz:EbR=+,?tpU۩Zh-U]+F US+řѝ>^g尙Z3~I)ݽStLS()$[>fGxԒ-œ9_e!Z>_)W^e&ԛ>G؈ԶO0TI]`ń7tpu/ͣ uҞN$#ܫ+k+@+h[%뽫+%u;v> Pjd,TxD7P(M#J{>A!=+CWW_ њdHӡ+W]`.Yb@қ uEǧHWkcGt'gg!BWVɮDtЕ(z; ~p=-y롵--1B)Ig5tu詐 "ZѾUm~W%#=] 
]1aɻBFyCWikdwut%czDWS ]!\n}+DfPN0>yW\+˨/thE+DٵzzR<+4!v"N%^5(R6$l`XCwVWRBzEXq M| _hv?F4}4J{DW0VDxVw=] ]Mkwp7thU`DitOW'HW0Q!DBWVT(&zJDkMZ߬&`5v_>ֶQ=t+:]tu詡sjnp7th<]!J!z:A&JR3D\21doT#Lj]i1ۧ"XAxԓ2U;}vG=qN589o֌#-)'\Ta{Fl,c)O\khJ Y̜LuD 1yaL&Mya}J (jk*1puovDkmX'N01(#ՄBW֘ǍTOW'HW\=+l7tp-վ+DIҕTr;6tPb핤jnwVViS16zC _h uODnt:8Idzy90j@ HW\/?07_0.*O?pd_У!\sWɞ~}8ۗU>1HÑ'} 8e|r{ۆ4Km>.=KA{)@[S30VZtgG4*(м*_-YE`;Ef*=R8rG@rK]끐R, ~Hl_ T_ 7u m-~Z)[]cgkM/vVmEy4!|5lYzSɈh[zO&JW251fSj;~ʽ3 Rd4*Q-b|hqK4;ρ~o!KGjnoqc)\(`څRHY&㔲Pfs:ZܑwNI*ϋUX|=ѕy$2_.e ̇T.x?vk899*Vyч86H ?Q>\דt }|5BClRu3lN:iR x$u<1T$$I9-T"p3L%kK u_G5mb<ajp_ΧUE HF"bR . Z9 R.4 ౱#sTp>e &K3'RqL-A@g^wtJ PGK˯zW's6,ţ\yh]VK-Lv{d0u/rf%"ŵyn}N\HP0M I9w4S4Vq+ eQ,'(2N'!ԥY>ubQq RDqfTK8Zn:qcI$*I $ˀ"r0 ,1!25.9CpA!)6K+.s SP+!1 1I^xZ@tPΑ1*r Sj;e̹@ω @F!hYM?Zd\c"!2E2VgR#"m gP|w% DS:/P$uWS(Vn}K%{WHCf XEV\"F::K1=(VG#ҾPlyܺ,)آ,)؊,)q,5-JjfZW?Ib(gN$9SONH z\Kh:sjTEg}*z=ځV cOngY#۔!@tNri7<O?שvXQTQBՠYW$yD[v޾\H5_0ZEL *AJ(ᓌ>-m赗t7F\ݹ7]F)mxpYnhQ v ^Xv{֗zeXBPYo8ShWcq#%: `YN0ىI|'D'q?tt{Trg >%5nY pvї3DflO.7gt~Gu)ddD$piQ\2sEW?~Sѥ==%hCZf1Eo qg^U{ާF15e;FPj1I'%izR5w~9՜rO6P>´9_5&-@Ȑ^m? si6]850]u|>{|<ƣ^ca^W"voZW׸}Hoٝs;/^>)j WĮcoVVYkҼ/-QFև} ې7l*w_P+Ji:ح(R֏R^ZǿUuدeBc@u4CT9lJ4iϟK-\\ob .Gwgțǒ|W-d%_W+&r\k_~IULc}6lD^L쎚WG:_]:q]I$.JQw0''';iɬm1A|rG{t뎸dNt=pfn]\Ԩ폙V{*X6( j5{LZK}(drCbR zKP=8'[YȚ:8"gTR`t$XܳSby۠e,DZ#f) \JWV`ȇu{[q_pgkKu#`s'­U/5Cd2ϔQ\Yj :((Hܸ#>KL&}Wmڏjm/W6wToOPEǫ@k|,X.%F$R"GD8B6Cb/ѝ%_˕'3|UL2F;.}" 3 d .HB$^*#I棫](,Y0deP98hAT K<9*`@m"oE+mq:LVk.kd\ Ah(3&`$>yJ}ph$ho_ϲ*VRQ04Y:MDv4HRrQB!]ԭ = 0r3w2EYy,X ,u0/\Z! ό"v'Pjm7*.SӴY {(iK=).Lnqث"4VJ|(Rkrľ'kCaWl0H2د5)Od6*&vNΡdɰ&t;))4UVUs+12LKmi&r谉ELi+W_nt^F4wTy;5{^.*.\)`~±ֺ?ФAmP3'hHkA~ueLb_Mf]f7;clљci0Q;^%'Q5:t6mbriP˲,mffP-mbJO4e.~2'zrx0]58+*V\꒱۴:)i.#aE'PM:UQW|W'o]{(5bC5i6oWapzDow}]_ယ8xͻ₩o4SY),я¯ghw?ߴijo޴DXimlnW0吗ÍvX1;= )>}=&mYYhsS.v6M! 
7o>B&1?nl&e􊾩 =cBP엃B, T&΂͹]l-=#_, ;Ey#l1!Be)9$-RC[o덤6u^݈mmyU 4R|!7HJ-WgOlbu:9٪h8tm=hg8_tm%~\/=OE&_ZA/ Cl5xak$PGR\+ߢR|+J{GGVPĂJ$𜳐Kk=s(<)Ʉ0X" ]__ V1"xRg{:g8q`yZۯbv 10f>Yѭ-PɘO SJ3ĝe9kL@7@Ֆ'(mA3!t@U)H=I"2jgB 1I-,KA E,*84Qٗ"~~J1W7]2&0Lʐ}K1A&NjL˦T@2,Qh!ɮ4:7xwŤܢ }8i:Wm*TvKo]&mWYخ3zƒ2--@)cY\sKIx~.}3]y&2Mŀ.O0g ; .>÷-v;^ZuoISgIYi)ЖYG_„1f5 |5BL FxH3H&[ڊmOn'y1-_@,Բn.ƵW^j>lڇ70V-p!>&ݣw?0#=kZQf 2ݝVcu~rܭ8(q30_o_Eٽ]Twriskpֳ4R@hIOz`6u+і(P,`8 & >fF\hm9-NmqkmfChV5{s{S=O??ah0d9AlA 9 -g7 ,1TK;G\[#U0Y:}>cműW Ԏe˶<fnZsa'0!Nvӏ4O7]`p$>YDD^kbT5قۂ_~;^M-־FƖTLʉm#vD1)umDSqq3Xs(Q]m!y򗷎p%T4 @(EJTJh\QLM+mScQt>ƻd(-Z_ͪw97*!t%}7@%r9R.!c.bӵJLTMjwք@Լ1PUȧ΍oacsluas>|(n.\VT |b}ϓݳ7߯7p;4_bXڨqFd'/oĐ._m̃'6o~'A!z lG 7َOO %PG9;Uon7N{*ER{1GWV$j'7s:vv#̮ Qͨ Q͢6*Df-MUW JmZSY >jeozߗKq B)  6 sh#kX9eYnYeϩ">7Ӌ>Z' R1ر8F ̖}lت+P(b9d *tu'PC*-r:A*b$^kD+J 8 |j&BҲX噿a7=9׷+J \Z=谙/7.>cjo؟siwor*@LAhEߺ3ȋ{61٤WEe.-ё9Wf2ގ(WZ\OI\K6̗`1VqR=c7qv{~J7_M3i_>HԮo;J&{_Xu7)^]|\[@GL$@F3)cJK tZM6d*mh`/fmtsf1 RUԛz8=6\Gs.^vmgm67LN#k:$G8JIT^d` 9 3=OV|LLPDE@/&.c*r6XcwW9ƹxn1@Q9J93d4#UmWmUmtǎT},VlUl;{*[G`Bȉ RШdА|G[TĹ*;ԮNG%)@Y(ƪB4@7q^U׏9h4>~]\~[ ^].hR=YKSG o__.糧KBZBE-_Z0FJbѱGT6Xk]-8cp""Dq%J)R pkm3Q&{˷8-87mPp[592Km#vD=BCɗӵ˴|ȓ>OiW:uPdYJ7hbI#*+WgO΁Eb3nD-?%!;E䄩2\t8m{٠vKP2j1;G/'z|َHDhrqW>OC8Xs&NL9Os'NSfaF9zJ9tdS[ Q5fA&X]#e6U*ܷy0`.y=5J_}j՚-~\fw,~ t2Bnmy>K$(M_c /VL?D Jx/.dI^];{;{r;;}8g{꒷Ղr$g޷{u+?:s>4n>y^OsYgl|t XR˿?u;._$'^!OͽWF 2=xEv=z1mg k DS%З?ՏvE/hP{l<5 }Ur,Js/B:Z `j-&jn&-V]҆|hOC雽r-9AҘv4kF#ldʍ($9HI\gALJJ{@gȲ-D&gmw^\]Ŵ^Owmo4ʯBd Qlju&f86Cc90xxSɲ}mVȨU) ,PSX<Ryr6!AH;qCP5䈠dy5bvQx R$!l,YC^Ac|QL?9+>jm\ Z7.Ҧ^-va}g ~k(b1W*8rZfVP҄Ǝwo]9NƒC,V٠)$HQޙ%8dPsP7 N7qD Ad)-1gTU,`\u5,AE6Q%"+чӮ!lʇ?PtB7E)KVTT8՘|aͤ}MU~i+kKfa; h^RlW+Ř?Ӽl̽s1jH,fh2uʹ y?іh(3*&bmx،llQRkk[{ ܒ ӤQ9qҁuaqӁNi"䜁 6YZ{4y+ \Ҁ.Ҁ>*^}VчbPSf A0]Bڥx#7Z0hIS$ r,13* .5%wbe"$#Dkb0sܦUcEL(3WWM_5d%E]FZEjWLi]I$/N4WA>+rLr cy|'r\3qDm)h%&P2PV ,P"#I 2zˌn rO随V:H4Ic'.'vhl.?o{ϗ菩W7~zs ?27.J AY > )@ 1-KCMǣQVp"T`G8Ghx81 "54V7FL9*!Z11*CYRuP]xƥї26!dbFjdt1NugQ<'`fo>n(鷥DbyT7r/r^%jC墮moS^\\淟Ǝhɸ*IBBF +  >B(XˊQx8EF3-KmMۏM\c[F#VZ ur(i * &X*xB4H+숀=T8d$p/8-Pm " vpDc൱T8_~\4 &PѬtOJ(Nw4nQ̄jܵI(gF&I.~K{ICZ#0HUwtX>ӴߵUoz@iLM$e_ՃjH?SӨ_Tb .+B0K_nTO{ÃJ&̘>)jOQkwb||ݳP=ΔE4[N{A0˜*;T։▱/Ζva|cuKpYlRհ"i`HK\3^#dH|KQ1+oEM@Sy2={.0^)ƓFmE^<,\M 1P^iLHX"hMCvɅq^"f90MjzߋIjڔW΃݁Dg+(jLreӡYox3t)Ӝa| qdX;AG3/bJIt 1BVgexƆڳZmkozbز^jaZ ;~P`oysFe+,k=xGd萲 H5.#K9u qas2%O&;Lvᆴ\+ U+$(r*,LI5A82ME- cҮ]!5Nq,z̝vX-"N7zfd0 ^ز9WryL{>8j3wb^mB*ڴ؊ewpϞJ+]+05('W?LU--(bLJ-(NڀJKsargQzQnDŽ&~j>^Dnb@(:Ɣb'WcŚjL T)1(%](^{sǩzqْ2Ds*LZ Ʉ w80jcd&Po8\pP$8:Yptݫa!PaS C@4:&U`TF؛Jb,#Ftvc]"4 u-,YYaJTt\ T[)A!C1+(O|#5y?^\.xž #ZZ % ,p-z * Jy$.HΌ.#mrhϕ"FlhGng(0%цK ;M? "Al=|cUs. ?Qgo^JVw(0AZR pea6/F!>!z&^S [Y(:u ^uQNd} >V`$"6*D/`Qopaeso;V*T!W@ZX.*,!W!9!_DviSʝ Qxe7>ťHբHohi3\*1 A&ܾ[t9~QTTO6`8# n$&O15yr1U[}mu㻫ysxV1M 1szn팓{?_b97nֲ'Ζҟtŗv3XޅeW f0bx>;ѪյJ^ǃ\뒱4:La#cKPcE>YPka^}psXwo_}~?7NO߽v @f)&mHmA8yik]C㮩bkt-dn&~ myI?JَQzR|{=ݫN~ҁt׬B: 2=v –]?VeEt!D,WJxnI$[c=q$"+ M"DךHi@$Zy\k Fc{2.v}CVuv*;5avff 7mvv>t աK7BU9~I◀TPAZ_Z1BNٶKf. 
aBSC_h.sxBN[ =esq  ujmQj>+ qe?a_r#$w²+Y|" c] 4؂JP*.xɸF1 dV@F;w.s{ t#ZW@'4JnMU@[^6F|Radl4WΠ P\6)C'"~ 7g]ǿ>t֫S *"AzNaJ4`cT31ڪQk]^}0[emr1j%ۮOe|i&bW<ګS_[3|wa*:pp\oGN/}`,^ vJ돯Vv;M0/*@YꥰPYGϑ31Sc{IB 6dT-Ц鼭m}ZQOdZ^DAL*0.d0aK}QP?lZw0~QE]%#[=_~l>\yEr-f<`X\&5" ~>vRymr*fd쬿rqR̫ R.xFzC#2q,Icrp^هX,vB$gU&MBͯ?* aquu*룽uFWx  ?[MWy< w=S9Z3*ar?{i7o~0uk32pyF8ͦTIگ.Ä `r,|f{QdO[Ts9]No WEMI" ڼ*;V˟{zaƽ,x`7x-Yc3?gV[N|?c+Y 17˷m—?e/=ˀ04f C3`h ̀ fм3`h ̀04f C3`h ̀04f C3`h ̀04fМ K%x xM04f C3`h *f{*&hl3 g04+04f C3`h ̀04f VbƝȌI_]{b[\ApcmɌaڢhbl> G'0cutBvr 1N%(R8R,kc#f+(4 RhiC0Bic[$rq{`Kh^{ޅmҟ}㙭KdX11/UR0Pg#Ʉ!M@UhaDQAsglrD`p'95rzWǃ~HO=['+l%6Eed+ed+~>\lP_Xf8a[,Z~|`[pTHC+R`D$ڦnnu%Tk;&m"zdzrO<`ހꡭD)qA`&AhHNq>ld⠿[ѱtSǧV<,?!6xl3tH`aiQVc #Q0V-lOBGCQPӪiòuRfmycsxwТf4UxВX&RB-N_fwcƘ<}1wزPKem[Ƣc\g8k10 .fdrH8IDžV m  י{c/]Xg/-`[vwrm(j&&VQR1qV{tEU:j$~v_9L"Iy,rWRކ"7BGŧ+F.m`Ob/o'$7b{-vH/BXZDd:9rٰƸFїIT%dVz%=sW/;4ffu5v=Ϋ.>ת閮im!ѢyŕZj_ +mDyV^( ޹$DCIh'B9 kaιG) yE 9A*+xae&+:o;cƌ% 佖'dwhnD!l5C9W] ӛAյ !`1fXR1e[nTq|!Fuʨ!Iu8 jxT8!,0$S)2F݌RDP94 J  /n 갵u!Vơຂ$^ [n&v59\~ͽ;pkmdGEȧ$HQ$ .vAH>f Ezu)$[~a e}/^OfW O??#MQy&'(HW!,\VdxL@I' }}3Ԕɀh]!*1mhzr„QQQȄΙdAJ0HFfFv\6C׈;ͿTfa{a [}c畞]NM/g_KI H)I7/(M%DJ!'dAɚmf-bk5`/muJ ںd!Z W ( EmI䰵b+sv#vs(;Em͈#Z&YU,X@6[L]^odCEx"4*2R4b )!3C EHVe2q̵!Qρ؍9W9gcDDar< ygeT$͈zH!Gy=NXgZbT8)&̣Q:yDr$$G_ud\u5f^r(.ƸhG\qqPwE"=#k'PJ# @*U(sSqPw1axx;|_yϑ;WbH4$14ُͳ1[fݾA L0Cg:v B=I(c&ʁW(8Ⱦ#&JrlTmp!14(GRd e/]PtN4xP |eFeXݵIuׇh:*捗`h )\2 ie2yo7B8BA6DD 3H6 9kƚQa O&9['63ُ4b`!a->{-E*!R٬ q (1R\"7) Cպ cU`:4΂OUfG@B6z/t 9a%)i "EfM_=.qU`{t>ك|GJuSoIs㯚|rknh~o磖KA#q)DKVj+U>ޒmgLM]'b(v9h`ϔ'V7G=bGPxwCnn8}{9ݎ5Y (@.Zb&5([VP9I$0"w7Eebh.}i{C|6LS{Ռ[3?&1jAC +k52IS СĪBS@j#ə=!Cmv aN8QG.mdsQ$L ]T dses~9MIDB&"3)%h3_ kI80dm$Rpکhdx#xHB92F2)x9{ș~պa74T}gN{yt,ţF0RF^*GvSO)ȐF ")VE("h[ $b0u!!&%MncaLKHRUD—GB1$y6fQjc,< L,$)82-:'^Q8+[QWXKU,ˁ'Z3zQ{q9x7)t4͛~% J.]-hhtalSt~@  L]+R>T-=cVII@#Ju?(LDQJۖ)AybkQfIeo >0{l`Q"եuV챉!98p)df?<?f.z7Wf 1&޶HiC^'1#_mW՞|U9O 0`V7ڒ)Ʈ:Ђ: t Q`F2k((-|)_Q:nvI dH5vj$=Q#)6=3Q SHQ*ux7cϼC"`gw6VGs"N[riQ牥!_f֧|FFeVZ7,iHiH5JCeDR_d_R~#h\*աq3`SD (YE[|j'YMXj}xkhu x s[᳦վH6jJJG89/'!B^ b }n{ͧ_bz1R[u9kLUN yGB:1,Dh6HzD B4#0zhJYb8o(e@2T!0DQrP^4h"HRT^zrN+ÃS3 ]Ȕ*dHWG2k8$BdWuL>}Cv|o=vprΛ-jlykW^ 2@¬%hR>i &1Ȩ9R%&g1k!Ȓ _|ݨ`}?XR cv_т8T2˨֑emF6K-@e]˲G8K(#PP$IDNԔLwKnp`|ӹZt%2)sM(\*rxb I˨,F${[zJf,X1YzsREW-&k)'˜Hӏ4w)CwMKѓ/ĜEJ29 _M&aeӻiC.†>>;s0oQ}?Lfyͯ'ɿ~|MPMr"zN~Yj@2eʭ!}dX֮?1R>-~ziVꃄ^8yøUJUg'/>72Olf3wR^RHD>ًH ӾgͶ~|3!|s1/[ˡ ?H._6 'w#ii> ? 
Tv֒y_}w|W(hٶ_)aޔ.ǚvʹzk0||Kf堹hm?| WrŇ>ͯxv˶-0}4z-.q}izk^EmV ];{8K??qe*o]O:|Mڣϴ]~#:Z/zbq}SYS?L>յ75~~έ{unQڣԣ/xOsSpII$\r5/h$'Ltm&YO8nzu컺[oCZޫ+MoMݗˣI:Qa5+W,.٤eLW-=jumhl& u t~Y/|ŋ}zڊl5~ͭb9voxyPQj'ZOR^2ǘ6cԄmTJ!h|`?.7 ,zh9}j ހJˆOVgE@+rQ%A+RrL?~eƃuYY'I?/Y?4e1/Eok <}*x\;o^qޙk/E JUCiү?~F 9wd{'K'_˞UuE J0wfp>VU}IQ";^).ݢg78ݙeD(jD!{C ĝwgGb].I{2`0 eGg-CDlZRl6ةS'\z]\=~>)6J{%k~::0o3͝tz%fznr9k?uԵS~O]yF8O(Ϭ?@b*Ϭ:TL4 {[jƞg>-W$pheH);\2J@r*H&% ȶy+^DҞd tt(;SɻУNcRdTi qɩ1.RjgPm~.1V߾9Ou* N8<&ys'u]?_M>Mm4ξlڼɳǬbu7^|Qj{D8AR- w-+:m2vHHJV9Zs`*cA4`n@& He;`z\bC?؉_@'BI`Eڈ,#B)Eʹ .F#p!BTzWU7ҝ HKw?{֛yX.1 b֠fJ8÷ "dAy mڧ[QىR dUR]z疟zb@=IJF:T{2oc)+*ccœ*zНJ>w~j=2ՓN߿<ِ%$*9bmTW0jĉ%I9$HzYg6r6VlxsuqVa+pgt>TwCߜrW0OH B<j.QZEJcF3(Ʈn[6UI.†PF0}rj%{>)71V/}[.\IgyMu _YAV^RŁ+}VKaPW)>x*W:I]ψmO\)vXZHBڑ=r# `#gGFŻ0KqPL9M !@I*QQdS.(eQ ft!=95E_QtwkxjH:$Q ٢aTx|"iث[I4Qٺ tGjH۰hJ`JrTt0 EDƒ;bB]6'̿a5׳H7汹S%B?ɒQ]5Qx1=OX[9`7]S>8k$qN:9&%9,U\!+R$|?H7Kiji~V{@Ժ{;k8'Rqf0N87,;r yQU <{eL6 Ky$tr?]TF5<=ɈYvOG>4i4^O'DOLj|szczFa{qOqxWoM=2[q#yǗd$7 d¿%a|ۏ'/ZfWqfA=s?Og])LGӟ?^=okIږ u-kFofV\uGc|'8hzpzgmtsNmU[=uWF`ꬮ3yq09*^L˴7ץrc(o1bۓ=ǿ߿߾׏_&޾Wo_-8'p1+u&Xsgy@ ?^yӪYYӊpmy׋~{RHGR珟gn8=#G9'uOۋ?a$qq+7USq21|S Ιq}0fr5\6֣s$y~HaTIFYD&R;'uQ* YbQLLH]l8 y=bymYss4*St> 3>PRH%:8TlN5bm6&dl˖ [NlwY7︼tu5|ݕYeD%#iC,Fᆠ2DfBR{5@AeH1ym!aJAV%L}XT"Dh8P%u;|q_8WSn^żp][W1sbN0?8ϩ8HRkTEd< I6fb$h#4ڜ}Ά'A ) V3M%/;iW aƐK\dn[cڿ[! 'd8 ˹(" 8&t`0H'["r)"m[:Yw3a(.f]HO){!f $V*ج;j3snw<  Kqnǧ ?J8yxӓq%~\y{ns1y2[ ӵU{(7;k84TxoFQZGG'WY\3+KȲ'ɏj)J9iM#![R*!Z6D(zrE6d9Q=cs2` ֑9GvXY,l62;QXxpiQ89o4;œ?>?GlacjÔ@@BihJ!b9B% Nɨ6b+R1ds*d h02"et\DԚﺕ9Gp4M+)DfcۨGmݣvG7xQbD.ɐɆb!/]TҠӤNE8S$m@Ñi<,XiBh6rIYD#h8zFN"rR]Y!fǤr]G=">(& SrL䬁 } 8PAD`FmMcz`dցמOC5x8ԥ&a6{)0m7;,Voʸ ]aLx~dxڃ2{2c,E Jg 4Eс$1XDQϭK)G s<Oɻ2BQRRG<;gdPs_?>>::)n=Y7% s}^3G, )@U5̓bsa\ýGB .W?=42䢑uH0{5J OW£anmS~;kJ/tIF]0V+Q٦& ) 3r6qqQVmO>_ZG"^n'{- z>4ʟE,l"{!d,QӓqKUJ]1hmw&λwPZH8GN/Y7\@2۞>Z[q^R.xֺ=|FҴŽh# 7摤iK®-7K˖g!M[ 9 !@j(3!36)Y+ #(0dLu=Q\b4/`$WO@3qAh4Gsz]9_|8:؏;*|E J:"T )eu 8`TqW\=43q AvS Ec@)f!<)Fz2Pr s0>{ C665WAWH R@Ǫd ik ]mo9+B>m ` -XL~Vnv 4.*V=E\]Sdm2,DCeU)L;5f܅RD$f!VPZ,'G^㮬y\u+%ϰ־Nl@[6;8)d2d,i\E+T!UlU ̄XNXcp^KCeTq!J%bJ=3l! J1~_.X ^e @;(wFdhf~c;cIqD JJe?Ǖ^ReQm2QqJ\ɔ;o.RP0e1LO׸?kE|HK" -9%sYo 2hCL$"wQß7zjj냂wu PS}uQChmP !Ads|r&)r+OB]Mfy7q:Zbc~!}r16>Z :D^sz;}cN|X@[=Ȼ{²m7ۏxar&yϤ7FvZhͶ 3!-C'/Fkv =) )mt`+b޻INDL 2ơL8<$'2r(Ebh18o3 RHF%S69kE'c}G6MŒczziRc>u wﺿb.IbocQ w{Ջ )cv+u}uj}{|#nNӬ;Ė\~{m;om::?ஏDmDs}]ZȾsmɞRJ9=Ra#t7X=ԞNp7jÖwSjxt-nMg&t[MyXĤ(F~ɲӯeϓQ|84OnsgKc\Ҏ$)59Zl hKZk&sƴ6Ng@bzq/;$ءzgVL^ Fu$ N@<)5hIbbɃAVIQz(,>]Wxگtӻ7szNd{JjR矉 ^qC)<'3Q<T{tQ_x[F.<UzSAҕtKӥҥa@ a P"$tX6"6 &eqM)׮72p6^ *TG#AE C#-Xi#1HzMKh~dϾcq>81K\#xȠfҺ b ^x_*uV KׂVbʪ@Vi\$̪p+wd"lkf٭a%CȔ~1=u=n0:b"PtBK5Q@@<`\tJhQ"gŗoxkD'c QV[|PE~\U- wiԒBzaЁʔ1@H$bh$]fE'?8V3!:W+╓I~G\nn\i}^W_ӾD/ezQ@tw_軺?~!)cŤ0# fIXy?8wP=hd a7*.ŵhqaM\,?\ĉk (UG'wHtBytl&j3ؿYtY'tPn2~Ȏs흺_FwV"uQ~#oןj-tu7_UT wjCZ*_q#oaTxh֣UZÏ+Y_˛w٤Փ`Od|}zlsd4M㲮㍝n$N"jiI@H C0}-Z*q:>Un A/c?nQ9>&FmWDͨŏ5RgX<5*,?{Fd/æU|,f`;3>cMl#ɓ8[G~Hm[u؊C갊,F^|4mg+/PRE2[< Cǿ_޿~Û/߿_߼/煮/oo _Ajkgf _~{YTުh0x\*r%Kz̎z OP"YV1kGs=j6k?,Iq{&5?nr4d7uq 55pk 1q`^2!E/lū`nM1#$,蝏"̼50&V !1њX.LȖqcؓ +ʫjk;GE!rz%R"I'" #h>{,:G䰦ɚNw&s6(o8m;s%bj۹!l.lGHDa;q2obˋevT~>]$e)X9b^ʐPDa E0;a2yABFFKŒsk=sFxkXd H6M9 'M ]]bo<熓`UɆ៿!<4XDŽ_6>LLce]U'Gn*6jE}G`'fReJ+5;e9+`>*w<^ǍZ Ѝto(Yһ:3"|Ȩ) Jshb^ǒbNTw[D/e׹ cϣGLGy Q:@@ԉN!E>$hvu@GpOW)=keޣ;UݽN%l|~>*vk}wූwd0e*L ΕLhzzWmnRV>593.s -wtz%;`Adm<HHvO*FU8q6ݭEtU6j^n|ؖ\|")S`Y0ސb*Yr[FRKÓ*(YAE*ec۠(z%bFeYzVx9`{M8Z͚\};7OhԆ?C1gSk"?xRs@iЁ#sB#\a`* )|@u2~Tͽ;|i8Nl)n~W|7]c͏[wf\K &!:2LEFt:g2Dq%Z!VC;/}]zjzӵѧӣXOi }:Lт+C'b#_F. 
ѳW7ۺ&'O}ewxE^j ;7o_ka*sE1[`եݜ['_YFTnvѶ?zOeYam-"/&:On{tIv4?{֜Yn;捛r)Ox:; ~|$A=xۡaѮPsd^BVJQ2azVS଍w4ݳKۈ" o _bѵ4tyekfl?6n+#ܰJ]Uw:j^~sMV:jY&|ѫwoGisܨDIޫ_4Φ52g<.',wwͯF8QBiytqȞ,+qJy6a'{΅ e;*jɭF̅CYd@'+ˁZ fLfC2jHKIQoM)zcp[X6Hm'#w; J1TFޥKv=N(5 R@grJ1z鵵`d.)/YrfI̘VQ4 ېd6@%/br"ГhANךбvvEMUȋNOʲٮHљ=Kd1;ύj9I'L.j'5|:|%-˩__K||{L mM+t-%Fk3Bt!DLS:eFet1{2&f'TKl.ES)*q.e"383cwJg\ؙdlʅ΃b5Ri[rGaq4ooM??#36!p5b@NfJHf9oHRg#lԼ`@Fj9* mAWXrFɷcf6U9̂6]O'񂋹+ݙtlڲc֖kM<@ʊ%&>ˬ(2! @($cf6FrQStʇY*7cI G &%0A!edìc> |X;Bˆ#fD2/Rh^"ۜ 0p*Õ !/OMM1|$JQ3>`06r 4Zk&5Dg8ޣsYTxȁTi/vˇIǦ|h;C>®lHy$; 20:fApCksf@cя.;.wӯ}tx:f3sAwr'5U&RSޑ{CAGh:]Ddb `<2WyIIeG-H6sd\!ЍIySrX339y2uV3]BwbN=w:{4?|>~qM&'ǵBgiأm*ӛm} 0)w}pA x =+l@ \BWC^$Gk=+X#"7tUJ*hZ+qj -jzDWe+k ]hv JC &.FT. /Eg'F.tM \=]kǡ+Bb|TGlGs7~y,P1$jv`8F~M'5t|1딕)|T*)*,VQ~MT joS1qҢUBTf`n8-fTYTQ_OSU^כ)gP؝y>T{-qID([uGMg6jɃUIT,VVK N&+ɭ: |R= ^UPU*bުgzv 3A9 d+ς4{m՝Н](FͺeNXTc00e-,'W jUf 7.v+L_\V]Pj5{b@.4#X7 NW3SB {DW ]\*haC J9%]).@NV#$;K*g}+\+.p GvMJBC^r&}2) `zCW5LoOW]!]DWԽWUAT{jʲr^`68䷰ެ/=5=]8l cHsJ#ס:u&"eO㣣Rcb$ ]{+BiK)wtZν[f sGG% ʅ'Ea>I&K9Kt:vٻ8o+W4ŋUYdf,TR)jwHK; 3!%*_*X )%$ڨf)@_"$U*֍KA_ZhI5:I{m͋j#r}HF 9yR 37g ,(R>` :`ўJb=w|֑]Zy;a:ir@,W(P!=0N\,ܢ SF3PQCm>+h-aa0P߸+"Hmbu2Bsk(|-b"$l ]K ݑixu dFCVj;xXUAl:VlzdF)MfC *TD@5w˽AAQU6MsA)%x X5lGD&0VN ddR!}EБA6>;I%G|!LUTZDJg=l4Z26HQc9MF29VԠ uszR uW4G e2a;o]0K@ JAU.1a RLA9 N0t掊W0:Z'8TR >|DU: <l 34~ :jojLEw%RI9 XT5@,NRl ej84vՕ/H!xj{QQRA}k_bKR[ Bt=(%5K͐Ȳ(n i${jE}∽}ABu&x>-_=}łTM%^1,4WU)*K;L'Q_r1`x=u&-/kn+>TUz쨻 L0B6#&f3x:paP\J_6#JhIWo#f BX9Y4<#yp (hQ( {By[ɐ$RQd"5B5 C0a0Ft/1 JWLd:[nmG⭐C@8/} ,PFu5wNu2<&[WB(NCkdkѡy'kWمX#W`>^b= eDA!Ayh"! 5y]%@_!80f;@]Ii(ʠv?a^4Kq[S-Ah[Ry !zbA jhjXb[T30be@0 9=,`E^1H"^S:Ԇ&@u#]䍡"UA  PeQERPGq1*11<RuI` tj$hci0Ih j hN\76xk+fnQtXChփ*H]6 |t3Ag2vT VPXG%}gˮAц?j7לAPK}4 ŢW}C D (BZt(AdJ wbdzlx_HO 4%鰀U2'EkO^ (T| .nI2[1pQitlp1:hK2h\CC,Z*cQ4k&zoBER!6.Xv餀k5c4&Tsxw"BPQ>J+e?2 ՠLx<a}8([E-ʳPǠp7gնMߖ\ۖrOy'|6AR<-4v|t{IlZi 0 zǼ~swڄ ?2A4˟|wZ'_?۫_J_B/o][o73$L衇["TF_* jk!8@kY3ZRe D E8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@rLN py@ky Ѻ?(xNQH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 tN \N d8qI8X>;ғ8'{](kW'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q}eN LWC('wS t\O:q2u9N 'Nv $Nrt-3r}> z3 t8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'Z@n9'cv|h}0| 5Vhw=w7/0ɸ۽Ǹ1.mR7.:-ƥs0.׼K2uab!e"3+cU q"2Vc+K~b_:yeuute-j+7]pRӨ+FkWOWYoL00ϥ; U7]1NW@I* ]!]9LJ4]֯~QF'tutT2fLsXCy~{Mۇ^=n̚-~r^*m~˟y{{ y3zFKC*^@%܇v{2JʹfǗR4pY"Q3 MM+芽z5;kYcgs+p=ij&2/2 ]]%wLDW 8sތiU%\vb{Pjvz\tu`.KtuZPƕ7GЕܡמoϛ+<]1f+ NWrБЕ֩n T}soZuon_{@_ݴ tԃuc*o>Xyw ~m[ulzGoy `m~z Wooh} 4^>gg~[ɻ*ni-mϖb/%ڦyI@dp@oh?ݣO?cڒ/i =csh}Mpo\J ۄ)KۇJ.ml?_B&d" L\O7d4Z[ߦƖ_ɣUC^4n̕}~ƌ|e7:ۮil*dWvcL =hj `TْUmmd?w GMSb3\3͊ կ2J>+,-EW[CiV`: >uDufb>ڼBWBW.*-]n y%K.L$AK1F<M]f}f1-KBgH>2v4 ]1\Ojt(HW!⏞7y`b1*S|93+~Kͤغ8 ]18M1V_ 2J#;HW)5]py֮74 ]TZ=]!H|=t9B#z>`2~O(? 
̷cmBX H"RVoOnJL^y1m?+Cx;aŤS*&= Xeo狆(Vׂ*cV2ڈxZFRԨR " Ⱥ(}G/=,^j ^"O^+nƱZ=5sF{tDC'g\&8XNӄ`+(%L̛X˥]b4TTdhuU7H e!/]>cou JHn]*|^l1bǫ@1EN$6Z족p>hxc'<&8o6ʔ]t?*tGVA%v߃'K)~}/@ Hur'7ݣ  y>Ŋ濤0z\Vp[q2)Q/ (n?.!D<ڇ=W}i$ J/P: h&jWr+Ch޷BŬׯACzP̕O^ IHg(/K};O:t!H^ҷTC06YM&&k4z8sob}ȧxNJ~/ߚ[[1dλAŘ$tɨț𪪝1RutͭQbL+f$m=3.-=D>s{{1t:azcCuz4CVgCH% x7'O7n}q`qm4b$Gk&pu 0gVHdr\)`i+F`Udd*E%&Q.Jr@a%.S9dAJ#x쬵A@@)A<\K{#(.d"%@ #0KQ E Jy-npfKHPdo4z>V %aM[~9e8OM[?fdT6ΞBF09՛lw>hGiR 6ˌ{QXJI*"k\5 JPV#ekSb/;nj{;xG ab% cA@x3 k!HF4d3^L+q CiPnN)Dt{^K[Wk|0 /#B+[3L/1y ~5[=n"q^ X=OV琔EOfe,E#QEf_@C4cQvUSөMr "ӡ6LZS6a0|*Q?{WȑJ/3;3an5`_ l(m}"T,X*e,2"ȌC]b\<9lM = ?ȋ(DP^s[**ҰcsĉCP۸L⚓ؤmN,}WɡOw1@7|(h~JvvY8v<)L~$jQA~SIrxi<ϢzgqRrpvP ,O /wP)bػXH+tʚ>O>iNmy'>υUo$!jn`g gcf7< ::I͹;vzMz;I-%]Ibr ?7՝LqSBʐÇvf:pVz8IhysɡY5!:)+))\$Ui>±jY~Q6vcυDvf@^>khb> :+}3.l"y5D콣ENIuBr h~);|]Apxo<94L5%L{ާEj>3#Rtv#so Պz -[}uG88l>N`Pկ1āH(g$ Խ5tnm>!բV ۻ=7nL>0te\ߕA+*:(A}aq$ 2$Y.Fk%~x #T҃S%jaPw?pq1~OŸWCN-U3ϴEWw׏n)*ɝfi;}|xY̺Iė^$Q24ad} #7lрK\Tt^D ELHx4iu4 "=UPgRg䯳tW\v1rbѶn C Ezo1uxvQCΟ<K]Z0Jhn(׍1\7fTSj(''%Ki'hL&H)V RAL ޘvԊv|֔ }u[m(#y}=߾g ^LMyKvp֨_c~Oڙ'S:RyBd$$]f gv'o2MQjAKL7\dL/LJ@)tHI@bqӄ~@\[#̡H Iʹ(4L&|BθVI#{J{K̖B-2[ eRhā-uk D95{,.Jר%H&,%lWө !s3:݄ A\f1q$b =*"Q6$AQ>"*5OV6Szj\{N C92<+3(&-!4%"qyzF.'d!2y@qZ4MX 9@slp^PtrKZA&lky4$PJMQVƠJOxnQmה ' roH$޲Ԃ ц\HQe݌r+i4\@~*&TE#GJs1$Zyk;fùji8* -4p&uh,n2Q GKվ]\CP2$W ^3(&vj|{ڭ)rsbG  pwuMikQeB3lSC-)̽yO5m"TZ!)n_똎W#cp=>АVçտ|9f:3d=W4RS?}p+H'TF8j%щ_SM$LHPucAIYVHOD#eD=(ŚQD=Fn2pn.2+q ͥ8dTHrz95*,.cMk-}r_ߎ9U'ˢy4w8+3goP+ M{"?"'򈯢=2&iE⯼NH^JR̟f9`EדsTm_oj@+іĠTVD^D99G$sp|`wDbдJHjr_L|0y@c,>SD>ӻ*Q=LXC3* 4P5I@^鸼FɶqH}!2(K/ esPG-UtA' :&F} -0ps?z15_gx2a, " f= 7fHֳ?l-> fNxTnc>﫹+BрuǾ l3|xD!-lŊlϠZJ{:dzCa'U#JVouêjCQQ z|s/Kp޿ƇQ/xAqt $hk'RZB$@h6dk8jQMM'"8- ܗf`KSKB ė)Ede|.G l`6YB(Dq6#I@4=L. C")K}0~jjxڰe܋hAf/ÿ<(vH ,`4We hSb%[P!F4c{X8? TmdC`QOExxQT 1H!8udoC?-i-i(XTڀz^ǨL.⠜0#S!QX6\z9i  /=5ZӰ ݑrFL;ie@ ŸԞ[`1Xc!G#4٘DckqR6D5HCU%B%ĘJ)ebhsҊ4{\Tu OzbIOEֱ[" { Ih~V$pGFvEm|(mۢlM6,dvL-g}յ ~nC5 ]YT塣Q`:D5C-hr- RRq/md5E]'+LRbU7 2RsL #JqilsڮkSq*<ĐJX1o}nc= :,2əee'L6iB.7@_h,v6KzBLϞD=}˸Aȱ4~QoV0P *ton0_f7̦1\كs[.n~m5R /o'(@4~q+sU. vCa~"+)׏yf2pOJ?3Ӌdcv곶׻̀m~%yR\9HrSRB)ᖗňJڪ\iPzՎF5i~ѨWKfY1a{Ե@,&'lΆ>N(CЁwz}^/ 6TZd{y-`(nhn]FhgЂ_vUcW۩ρ^7GEBg{u'$-Y/R t׊!6quq9kDĜJAD:Yʐx )Dh{6Z5KuѨF@ml>T#p֨E2*/N@ ف^,7=K;A?ϗ#pw(}yu^QF}rHzn|^W LJfR9(/A|2n^z F/, ?" \[gp/Y!Q[r)A֨J.4/*n= {q̧zG.Cy+, k >R#'ljv#'q/eB:mIp&pޡ&,|h1YnRE aHmK4&c];*c!]VӠPN|ǔm J8Bj(M ˦%5%+9z\- L5Nyp A &9Q n)摶Rs2BC!!B"+SNQ); J4::"57Ns`L yVԚ?fE˫Sȷ hvak0QRDIHԲhXTl:j4˗Yko SĭށuC.u#Sl8/t g+4*it -c ;s-ɥsB$KZ1߹QrgU_摓 AicIG*-* N*ņ>"'dR-rҤ푓 ?~n>7js!|nκx)FAyPfLwn¤"KEGKFqRd]s'2طA @kňV|!A G|7I&'h;A6<%ЬU;-\nhm`sx`0:Y;3bn2gTcx9kQhԠ]Q+/=sPI+\*ؠPs~DI20dy#W(wwwFN攻ܡ4j1/B ŏmV 9lMLO|S{;&\O{'k?؛"!UVرX{/f>i$vlzD$3F^2HɎ8痢RS \F\|=(g2ʞ({&왌ɢ?_`!A.0%s9 &baX%jo?ټ 8#jYێΊym>;\UoU}\TmZw;XR [OKå^-꥙^ʎe);RvTufԺ(@C'?RJsҒč>1E 84fb涝?URET٧* 9ͤK@Vݕ6rl׿BS T  }A3E/)tK셒 ÖVuչRucB)cRz+\ԄLY;U@bwUtԒx J*y/~j/~(mk7 驠)M=ItjWrsLcԚ( gؓȠeGZc{eD}bs:FބKX8': FL+! ؜UT(rbvB[)ߐTK\| G'9eR`$)hbZImQ :biߒ^, BLBi)s$]OBD?I^9~:o>S )'蝔I+/ mY.pla~*pҺ.Kl5*mNmwВÿK5t!,o p73IoIӀTTD0;Y0y"}+l+TjcycGJx:"۬BB 䈄 23A)a/|ѦnHa}ڒW*s+X*V1ֲ5C[5&!Rw`?CV3*3 n(g51ʶ|Wrw*:XF(̕-7ZRB*M%wRۆ4,*g%4}kpLXlv/%`gvWv3EDYCy2ў(2lK~*,%.? kp/qކZB*`m9ۊ1a [-ux-QԨIW hBҙ#m^TJ:%HR ܆-9UrIfecj:JifX&^^r ðf+3ץ^mea3Zn$Yb8X29SAPBs mcrW>(ݚOgX=}96$?\/fa:"A_rͤHy~I=<9lru\׌"pL-e.V=:Չ*L|XySi"Yd׳dU"k6R b_Ηr~19 N6] @O$rl]zFUNųZ<;|7̳MY5*uMdNt^eY%o#^)\$)zeȄ;ʱkJ((OPVԄpHі[fpe{Q~qJ#BZP2_ޔLSZ],ߟ&?5%4ݽ=.SJa9)`rb1_~_Mkyن мӀ9z[<B:X ݿWzw|20?6jadS/GmNT&L=5. 
氷6]KHL^QK@O4).*:oԩLRY/xZ{*QX1[^5JA2¸*:z H,n8@ӽXXlذ=W /g_˛p+2_ <2$dn&ϼg}ݢ<^穿[˧Gx|R xsc4 B[L^ͨ:I dI24MT5h+GtTg7\h)LhҺ<~vQT?dzb$|}vݞPQ[h{wNfxCmdRɮw6lYdJvNKlm8q4zk[*r H/ݗ4Vw*-A|f*~Ssh0*T8(%K=UfkǪ%VHiTwliĵ->F~bI J^ mNJ%%t@&Z[l*wgBzOݹ9GHIUV^Vﹶon %?4m0Vf76|07'VʷdpuVH\}ٞAz '7w%x 7g+PnsC6}~=ͷ݇B*ldNTweyFX׼CV\M[_C Mw!?^MsZLg*45an?gڰ'3`2?U H"Rfr}{usa̓8m-0|l$7>;z%^a#mGfjM*{:24QDTKfoI KnQi9`ȡ[{ΎO7ߧجMi uk6`}?XbE ]F|K^2}yh~rݡ_4Rus|35xɐ3Gm~B_4 /&tt3#A /7S=3s;^0n tӤ)ԈKs>5}Zz6~yނ<ts\fn8再#ܼey˙Ls5VP3fټqkBَtƍhV Yҧql)psys)y0&~Yy[>)9S_pB2]>rPǟJ0KXO["z AhU<8nl}\ :\T7q=֋VpT G^-`(IiRۑTHsPH.WԋL!%ɲc$2r2&"Zk3 S&<% Wzsyqk.J&ȾF/_7*/y,o""A8)?"3/~4/Vx +'B73tXD(tO 36$H]dR8.̗ʽSp~'az:h5Qc>+m".5' )}΂ B1 4=3nRL6D1P a)t3*,_ c[qK6&iF2bv|BDY%qIo$KƉ͌}Zc>[OFw.we}J F:)Npd-dͭ^)ߴ;xXZT\U.f2 u}^C.儮S GY=D[u @} + ZUu]DS*VaA4'L.K鑢Ů삅&Me*X軑 FqVSBn7kp;wB*.5KKp(T a%aC,*h8Jſ 7LKsտ/?88pk0#,o; -B$fWd ׁste6sMp(HTueP1G,pw {[>Bi)kSQk& ½;j zhpn,/?> Vy eSQn6=t$;(M(ilA.6nv}~)C>?raB7vG7q"Bo=oFvg(ZZrrQLŻ5(,d =[ˌܱdE.n %,Y)({(r@N!m4YFvDX;#la,80C1b7?`C?ڇ?HLO[8,@*|Y,o1C%^oGx0(80QΜKdljYS;L1Z{`OUERq,2IcΐC"K %o7?Kj4EZPޕEhpa _d1<+IZ^:WYBڅ;exIEj ujLIÉdop^F$u3rCˑrVrT7fNxʭ" B44T)zg2ELF9l4\!.;`pWs} 9w%~xA. 77Rboem؊L -\TD-KO:a,Ӑ3/ư\4vWiʲ!9JY "-R$XVuYb4{ cj$kA{x\|KhVW^[Α"iwӱ\$.ׄRt:VI(nZ*%)"4 KApHҖ'ENctF_=g,9S9$cÐJJś;,;E[&$ʰ:řLb;*38Es2E}jTeyP޻êhś3j;Q7e>vXk -@?_<e)?UP) :&0MIw)1;Rn\i6uV-Q^@NtyteWW SC*5}ͧE1+cB$C:Frbi%7|;E.0]`$wE&YFD5? Lu+MOukSH\I,%z_҄0$ (< FՅڒ邤zGTn4u >-źV èژHjK"byj[,2,[ GB!RBt0M6UQU@|;mf.H!S+$wl0_*Zv膤rք:j/~+J1Q-h`CȹѓĨs$s-4>lDDeD$Ƽ|'sz|u$?FEV(ٲ * /[Wk NsZ؈V->E}Ekq\q0v~ӆՐƅj03 )T&X939BKޙ9X2`$yk.ÚOת_Mh*e܈_2S>ĻTKp3_UTFb8gʌ.U+#Dkm9FlRT qyoeI@}--f#3?7 Ku&1 guaT<# +c¯y?_QD*R@sP:EIC3&BӎQOi9](:reL, S4TxCѐZNPDF"+F9M4Z \Fi ` `.5FkeSZj*ﭩsfOnOrO;̼IsLiTP LȠפYay"eç[tlYPmB98)[5(ƹ !Ɍz8*PX&7+ɺ#J!uL4ƮhRp>$j?Zめ#:}ue]!?RX9P|U17T|Gnf*牕CݒYnT7K_kj}p"d:yurfےe_>_۵)˞Pcrbf%iÖ2aCq9@+g49Ϙcm$;}RHJ?)\ {_SxȻ[pv|?{,4Å`sLd&m#\fu4NG 8嚿Ʃ @}` $ǟ;'o9'7;:bBtN$^ "o{dhwbb<$iw>0_+|Ӡuqy1? fwC<*8W/#}Rvv, Jzv޺8HVSnNpjJ\1gsͽiGw[F${ 2pP.w }w.x/-en˾ Gna"^z{Gl]wx#Ӯ:Y&D^:-;zjeӪ.vœEoZW,vR i Bjђ:* 5D[DmZ>Ep lH * 5E[qеh=>* Z6cBhnUzT0oLRc&QE'΂7y3;o}TD}ZpBT^!#K"_nڷM"@Q9v[;[V=0[\B,-)nauSM'ۿ1 Jw# HxVshD"y'Bl\Lu k/oyUX|h-0.0Ow$_NdmZPa>٪|cR𹟇!= 1}[ |t?ypExM]YoG+AZ}1` ۙ $VD꒔ӤDDJ,z!,R"^Naܻ4 :BeG@RnǢ+|q40/t`ZB %G:z[!)|MeADҍY\ߢ6l5̯a8TmtuTW>.@D.|.`k\)ݗP>I%5_ /.P WP/ b>C0ua~%TZݛZ&uڍ5PQPU8 u.tw'>މ zkש_ٟnQ)ǓUSsJ;'rQ7$ԜV; ʏ͘T*\wLPS][ʨ>: LIJk?B@m큫֨09"W_:[]犤mʹ4+KoWhqr7+/ڃ_'_"DtbE~azfVF`" jl+%UUƋ||H(eIN>H$$k]'dDEPqF(̑>@L(R-ڬY;\ncy86p!EXk-vI T] --$ ȭ[F#gF:4j.N\.֒Yw !!DZئt]m:`LhZõ}ªsQ"Ik:b/~: GqD[݌;`Dn ,: %k~W ECo 7> ݔ&`Mw}9>nM&}C&@YC (^{AF O ߮sLf5Oc`X>vPlm!Pf8k8EM>Ob"M>zA}<`ʍ&U}\kp?*I*'fikd=?4Uc[4jgEC&Qpqk,Oim h*+d=zo ?$.U3aƛ(p5bMξxD3E_hzB6ӓl R> U'Y&>o~5ҽNe9J՟D[@'un'():ܡ甐E40pVxe&rt>r:9d&g,Vr-?^})\qNJ)|o)_-ͥW=c?7/| w +,Oqh YЖROVKJU^-㈙'e8{ |XPKj©dX^Ĭό&2JEtD9Au KF†^vߌ_܏Ւ Sn-V6h.'/xߓޜL۷'z_x33v0?&ْ?~K :aHbNyp4f|UrʩVry뻻l(c3CH Pso'-C\p0gYGV. ZC`@NERϪ%T]O{轤 F)@ cQ9UȀ)$n2QJ0" 0V; "pD9qW*Z[BLgZV)hmj+PcmTҶQ˘.<6\& 0XC`hnǭV 6%0mtoiX"GtQR=xjϓɥ~Ƿz{2'.:nWJIXn_+?ݮUnҐj j١ΫѻCLF'T0"FE/ܖH=6m|YټrNǓ2diWq,; $ ~.IaǿC&W'W?6}9)tz1taxu.s^eKo;Cr3tBHc5B" : <F)DD 0\k1^sZ0㪽2'9eC,8xRg`dD%(Ds+AFam%\轏-0Y-QK^. 
NƗh6,[喹u\,0x5x} |:x~0~Wy̓RpaÛERX$e~—t7Y` S0v˕2Z/դK!Ǜ]C@ާ鿷6Xb yc`Jp{[6֭ y"%SD̊ug-)m[wm%Zf[E4G8{j$݋4)m[w%[~cuCBfԮazuex-)m[>DYlBS[E4K(+&t/2\$N3nuTi=KhuABfԮn-vZ7"3A>ca%G]lBS[E4K89{jϾVMPTکC !}`%X&`$$+wejaNGê}Q}T&0d8Zhw*jЦ 5E}WCP&hIaeS;uj Nni[Z&PMn '\~gY۠ B'ETZLX}p-YPihtbI|4\5eO߯X9Q% JyDxrVͭ&KAF Ek nk8H+snXxL`:L`J~}5O>.'sv>l:huhn4o';O•)+?BPq" %p4kHf`G!#r!?B -pK5h"Q zi'hIjA UYx̐<r${dR)gHQ љیR 4Xd#ސ Lb},$DS)J"L: Q0 XுZCJ`Yp)1|Cy`JLbL$,{l& BC/x M PY 99)۠FB6+,,Jc ɽa߽yq%`We}/ `\mT+j"ߦoڞ @1 "< u9v K0ו>u~עozlku4L܀WOv k >h>ƺ2x~_!*'=ķ@di<i}?n'ޝ,P~O>.. !k7 #%VENd4yQ\n̳ B$g yfA_2.Qłɦ 3VuAIE\H!q^͐R6 t9Ry,_$C8v`e6h7@!-AQŃANki RViY#ݱZV~grn"E.Z6ʘYw*l&ԙ¥X ax|aPVI`Ŗ8"*)+ozE:6m 9{jUՔ. PU#Iph\S'ֈa P L`EAriN\ NF"Ga;{"^~ږ"EsW+cBv vAuw0UKrU%V RB! f),A#Bgnll,. ډp-#_ۥtݺU&|V*iO•Rj]czS/3{QWLBD5}f}SH3o_?7koWCT2| Ԅ-7FG|q={^Ӻ?W1wtz4?lkQwJ/pP(ݧG'(@&X%-n4FcZ.%(? SG}$bjuϧ:fH6AWj"(mB%1dVXa6,YbIO=([wiEtF4x( >mNV[c{~ ;Zix,#Q+R 3&Gu1 xg3k*.d|J`3VY+CGmkS |?-dt(˒2?GT+m!+fC/)N5^eT3%i3H \[͟BC?l|??$-&Skִ WC*!H%wA`Lf!F^Ic[C+޺cbFO ;4L?C]lbjZ_Ƚ Xد۸ p]}ryLڹ3n~X.&b=)er|tWz[ƫ|gYRCWE1 7dg&B .+^}yde՘2ѵbDIi^#;yIUϯ(expEA;&QdI!3+'$!YN%g,DPLn۽1"9]>n;6`a)awmw=Inwͤ*]vz8wƜaR$ʳ @{02Dh{:wܱݵkuXk. ڰ\'S﮵lB" Ł@zɷܛ 1e7 s QѪ4!xO\ 8{&sl;dڥ}h1Ԃz4 J`tNsw1/6^TNxأ?;uifg4r'Ra-(i"Otq5iS}d_?jֹ?'zm$j++AB+;֥֡'2HL8*D)),W w"wpܡ"NȥIz8š~moːZ}Y: ُY,Gj 98~ݭDYHWױ?%~Fu'UeJ6b:?׉ I q#rqV?x=@8:#Dو,jcNa<|6GS @1pf Mk~~X-QL#"-.ɀǫe3GC%(v~б\|S|xѼ%G6y?nL@Tn3U -{3SoNiY9m2+Mf&*X)k $AR\TGS^TDm9Y}BbLN3× ŽwKF$Cx 'A?]?6'%K 7 g4Xzźa) )^ d1#m @bR 4Q0tD%dd>|sH[d-Ma4iZi.SŘb2mr,m 宋c')Rnd٨elYv+4 ei Dd(6Š  hd&YCUq坑*$2v%vnN jZgKF`VdYvI'c $$hb3&L'T-弖ҋC'KA+5K܀$.H*ٱ[ypkI*I^NPqQw)^ on;e%#FBVlΫAzIg| Ԅݵlr чߍ$Z1o/]up`h 1F}ztb*@3=3S.4F*%Gή%) UZ?V+- gȈl0V•7Ǽ1 Y67^{4v,0Z , jr#C$,.+PgP&Z^M& 4)n" _#qBmy+ucZQAh2&k!1PNaCnDU̐cn\gDpc(^ša5Va`F&skވv͘Q.lC.@~ lkŰ|!L 64v)csO:Y)92+gBfLȬ τF DA(]ys(,:S])_KƲ)d>,JcًfSc_q9;?.#cߔ&JiҬ&JiҬ^T7YJmC!r&oMqi1rI@,zU']du}`WR8ԛu۽Wb2^N}fǑrS$(iv]hW$JTM5gd*O|a<ʐƜЂ6j+ {`>^P`u) 5/cvW"2>QQWMTЫ|R}{U}?6;isTƄ/eAOҾUWr\S| 2oن 4:G^q^%Z[2_fie!el:3J)T2p #嵈bi6˦ *7[l*4p6-JEðábnw]7`UK0-Mx s-!3$⁣R|MHd^${_&?h![٬~]I(X8)Í8ֳfd'T(Zs0yv~Y[xO|]Uyw~&N)gg ~.](UmB.h99K8D|7mhtGBI']b-6[+w _ۀ/~Ӿ 2 Ϣk}ri#6atFM!zD/A>:!%m;9nNzq Q=̿ 4y@q,^'+’O`o(.9cȷEAQIP!՟~rDۉ elrHO>2U{doj1=:#/=PZkc9)'2y?& q:'ja|4$y8몄FGV`:5f#ipV݄m2І[G9iyl8UJ9HUH7'Zd2lIz@kS6 _Z rpc0N>gb庁VD۽F]:h"K,X#E2 :qdE Ȥ HR(ϒ( ,53dbKOݨ!CZIQѐŤ &Ё$LF@M0ɒ4JoY"[n(A=5(g/aJ8BeE !,mrMr%S=$wsҀetկ)8 !50OpR"Y!ͦWih?ڛ2*Z{ԥؽY;v/ٖDPH6FARXcX19+ R]^^ !` l}RDz}cRxӈJ$SBwX!>:D> ki'@!; | đuON͘G\!|:WI#(`<'|3f rW2KP+e++r 6Ym`Z#"^[ w|ॆSG sݿPID2T}YT|)s GhȞ}a)$,IE_(׈ b °((m4|-R+3f+]odʧ" 0YTR/ZtgnnLj2J#2Crnj_zi{z00w~P%S2MSX1- ؾHYq!Ntk{@X) ӻbv;S+IN{ڛ:Dztn'h_+*f3f㉣3j2=]?ìFY9hrs-Ѳ0,w( a>^"Tla3`Vq$ yЉhdY62I"`ly[%15T AҚ$)yS6I=RWW>D]pk2m]CR;IKrܕm_|*xʐ(U1"e;`$S8qXĊچ^grw{f .R3wS#% @?m˫z=Mp)Qp(\CQȾ>/8kT)NeP]%=\v>ȶQ/ սNy2|pJ.QrK|TuWIcP; ,fIƘ2,GZGCSB tBL 7t[w|ޱa}g7yRU (t!8YT=f8C&ӛb2,;`$=^tX>*#&. .کp,i;@=\4Sގo/q. 5ЈVo{ɑYrxS\D Zic*[ <<9jqbygNS]iڳ`jvNn#cS᜕kZ1F【jIxkxacSՅdL7Yﵣ׎^;J{VM'8 y)[<:㤌$Ѫ'GeǛ4}Qx/ߔDa*۹RͱfPD44E4[?3)I9 hph>7k@wt`5o,= .[*lC]׳ld er0ؗ(*lMYJ#ܷ5ˍy@ذTXB$$Jml{PGS% !`x8i"2t;,ˍy-lE N_F13r9E-_}pK@VfLqѼ2nŏMnwzoϿ3xH# qOi|GdJTڶtuM[?e5z.P8MyTN92z?HKSw>,CGvL퉦I9xa;xs;d0Jw֟嗫M>k>ɗoJ O-S?NfJ+ jB:;St{מ5LɐKwq֡W"L_DY81x5k\>b)X]ѿ̟[WZ;8/Ҵ59-~'Q^iwʧ{7O-]D1yΌ+`N5<[=#M =Ժ)4t R =ǟ68T/=@6$\`.#h$421qbbc0N ` j<T0>kCE㒵WZ`}-f&^sOk{;^==˭(OJq^Š-xWe7`~] \T _hOo@&͞ߔ15R׃&`b^ Ga_$Vr{86d"8#u5 7\Q Y! 
F8fWw|?抷%!?3$G3l,]0lՠtނE4eٍuf,VӼ3YԜKA)(#P9N LZM=W?:2 |1T;[M0,^_>,=}X?,^{>c,`WT:L,Az%C~ gYd#]Ij;,eU8tSZatnVۄЛw?3ondw{/h̯$|c a;:u񦸽vր(5o|N̻;JHK){ W9y)C YveiEB2JbfϽe_˕HI쭸Z jEhPW6Ednla|=FU`ON7}p*`s(,hb;~N+]յH=!dG^졞 $sx_r.Nt[pѣ@'dF;]>Uڹe3j`v9 O u`bJ/.~"@IoIpWl#t% aP_&Y(((* y- H3!:$IMa0R( M_ԨHӗoGf"CJ6jI ¾_#srx;L!CRQ;c2}&f?Xjީ(:WJU}}NvuFV*;nnRic:{AQy S~`4N+m>W-=[Q _Ǐ[ QPV(`BH@!2;Fx6[+w{=*#QO n #35tT&=j2O0wI%gtb] ( ( ( *kR㔗y8phr\K@&pE0ceDFWڊk5:6}ѲM#]e{󀚣rMs@R_ h ~Y*E$Q ayPu#0Y)a;%$ifSjI&5CV\ CjL_+FuKZzN_HBjShqjolg3Y*U<"AEp7m"Iߡ9:^;ľf4u"@2do1u: &+ Ia|-B%3e nf0f|d+vl)(ƍUT+|Mf9K'vԜ u-T#nGmwa:3+T{AL7L[^-gZF"-֔0."d1dhMGR$9'IICBHȮuUWWUWW̬2AvQfлΓ/i؎#x7z㴳cZ>ecmɩ^yYgtʲ=nz0[MV>ρ/Aa!ɴ %@=@[;V]wQ\=@T)6&t5}(λZS^s4EF0̵Ҿx]r8uV+Uzr7n~3w.SK#!JNL„RخMX%Q.Z)0'LݽxLuuȮV!ʨ&d% {DH LH0BbI0eiF&2 T&1EģR3ɜvG@n%SVx,5j,U ov0'M@"Ja$=Sb#W+&R[KX :^(0RY\T1.De)L81= @BKKԚhNo8R8 ٯ9{2)CB`伦ɪe<"K,Ѱ8!!R4G`[DVPilSbvgVduJޫ_w;X =%Z-i\YXJTIl\&Iu5"\" J59I !aaFqHQ4g{j,)QKBqi/2?8~Lg}k tc+'G'_ O>q:>{xܜJ""KYڧbxzdDQMCAQmB2 * Wa襪NEHjPg]h:5Q uM rQ@w+Mi(̘t 0P W(F|ɲP\ie]HjhDAUL};ޮ',`tI(sqT+` 헓Ҳ`E|I"wra47`?{O%LtߜMra8s#Ww簞9~~N:f<]^_1tװBK@yvJq!ѹ*ib\nTH@SE06 l*wD5ZU_\/#GF?bU=_b]K&mP`|6Wl% ,~2X(=$Ql9dcq/ݒ}&q0[0ave4LWU- XdL$G8י2X8gw} A`Sz)KU;}1 :9̪ ilR`,MY*Uw[q W9jʔO``}^(]V+si3/=\G֕O#$]fDOqo l K租:0}X'h݊㧣o{0rR7:=~=M܇s:>¿)G^>ggo^?Q)?Ü:ӻ,ٛw?ۿl9>zc?F%􌿿]&w߽ @YdM.ȽR1W8?M9~{_Cs:Sߞolt3-3u{78\DdlAOS%rbj§+P\C}TMtw&@ppvDdr;XIӈ^t{܎ߺ&iӟ>~6`&i;x_z;z֢on$㌮ûw 9َ3NMzp>L9 ӷ(#}u<:(WC.Mt8|:uv0Ljw3}Қ_*U֚ddLL"wzbsvJrޛ^2ug+_w:S r1[i:4~PX-uJWb}D{r`G!\6wևd icx잚ɼ3ˇ9R]()ug&͕;' *z_lޒ;C5ۆ/lJSH*rS"95ѧWIҽ IfabÇ}΅LaJx:|#3(QH)鶝:o WZ+[o^1"jTX?Gܴ Ջ-5W/v)]/6qWz_j͡XK5BOTn]T-.!)OrjjS@ǝ~߹Fv\3zא 'Zbevlq/;V 5B|2~Rc@ "㵂8Bt/j5'6EmʫR8t| 4eHX2IdP5@e ߒ8h-aBYH)S1PSrcEb&讈MllV&+AHrX G0 $ k&EK1纞:㥞u$&~Kae}IyJDtLqbtkfP >T[D؈!Ä A.#F`)ZE4,׏\]ɺLh!)MGpÿtQ\!|t)]SQNqUYڲ==rUķiK$'b ( Ɠ?%ƭơG`J߸R֜7:weFн|5OwaR'HdLe3.[!;uaau؟ogGt\8 O׺ܠoP%-`fr3>"t9¤F98%i>kR`rJʍӫĵm$ Ć2`֠ Ph[fbrrp.1Av;#|zp݄av pU[MrCL0]sqk@I}̨FmǕ h[Ojc͙(l܅wwz$iBx{(\Ԯ^nOR o`R$ʨ 5ThWʴ/`.́w{<1%2*PhB#s/mY:LKcY3(weim[0f .(&wX[;ZAvWSK<㐅K5bbʅ/ihs3*zsP7yjT3a/4[$1\ժܳ b_(uP늬kMj\ZY/~5p.L %.bkٞ%_`N (5^oL~cK=S1#0xq̏={=)w's;eQ񥆬l, WK(5g԰K*Jyv1,kH pC  k3V#F9ad!s+>W>뇑JPCmBA2ɚ KJA#twc%v 9$}d)jQh"b- *ԃ/O3Z2V(L’Ŋ(PCXzaХ9|W Q@%h%{Rǔ܁FCj3U8;x-\0~K)o:?2Q1^ 9{DL8偀{Wcp3B5=~ᅟhrųz2\-*ZUV#DJb'٤ɥ0M*kY5oVt {vl/¨XX|tj|%`!&1/?Y+4E0ٌJg][|R}j@ ,MŬ H`5s)U&%cAjDm6Mc+W-ĨoZWf.pm`m*TWKX4[Xm/dZRua[ڗMLRlST#kfhN6[~k65h#tmnxۤ|bۭ2bR=Kj]GҨjXd)h+ ZqᐜZ˺@nFNWnBEQ};nFW#%SM&!6A'I݇TƞJB Zl =*Rq,"9*C+h |1C0^hTީ3ј'(ggj{8_]U"8 w ]ט,$N_5%ٔD3 )J4 2ꪧ`KcK /7>ns{Y~^[nE,\#08!Ow<ύ%}v?7NؿWG|b7_~,f|^A/1 b Cq9}:O'餞>,>~B<(rQHoI%as:ڧ`3Ig%@6En~y;ɑys~LOq좛5og~vyѠ%;h22A?Ngiy>2 "И( h:ڨ&/Js عjRF9fH|uECtr+Ywg{blAaСr#/Gµq4z6$P)MjoRxZƛ,#w̒*W^HF G:#d銷:fd{@wC657o=d jF+0` b ے⌖,=حp+c(sSFNRjshrG(K>].Cuz:DV7v2΃oC8ؠ!>6ZT[NK$t - h,}2w;ٻه`mB{xYˈ]^qvrBݻG:j;n&=FԵ[(AVb_q[QOTעFVZʖ<s 7efCBBXv|IZm]_owE'{}$;e (om],b7;tS%N;Hm]jwF Aѻ,3I oE^I6j)"{G FD [g{=*C%yZw$PrV؍ml3Ufncv/3b[ ]$QALZnm-Xˠ}#m=^i|+ٵv#2}qV| Ff; I# xLZ#4ɕ߻ |iqrq4`ԈTdR}]5-^Or^x"Ec]o|@n훙;FҾn} $Zm }eghT/b}lI֗trg_羌R7G2'_P'dջVB?Ѹb{U6B  #v_DYm E}/9ƙ'f̶}ыF6tNoS/XA&Y<[/Uz]?7O?374ǛɏyxDx:995yvؼuzvV)ۇC&;_Ӯ𾪤_Oë@:;.^+bJEB620 ]IectB0(L%l~is` BV@^D Dx tNڐBN\F(c *(L0IJ[\ Fz~;b-XحZkL9*VG1k+TV` H:TA&Hz%R_7컪 LZfְRl}Z7[81d0g;T8rۻ9c+jQWm;,Y*]}jIl`"[w\zS'x sP:(DvgQ-j?}dޟW_|NWV#9rk; ' Gjaݸed1dT?OJ;^Kߋc'O(F9(xɶx糕)2IJ@A׶xKj؜Gc44L|z'HrX_H+&xe|뭍nҲ%o,VAjbҿ\=E YSZF/V(&Y`ޖ?oTk? 
?b6:}[RI휎))"Lxҙ4B@ * xUkG[Q6,[LZ-JC!1.t651@+* ج|lXC)쁕N-%l^Q'S,UV@%=l:sM8?Yiz1;ZNjhcMI\J%V >{P UH}PEtb!HLBM5-tt ƪX$J*"(ɤ!*IoN9#԰q66zSBFjUFb𦵉)VdSYLhuT@@hPy(|Nb:doΌ#v9uJ)]e]\?~xs\^\֋ݣtWRx}{*G៛oJ Y~]?/uf9ǜ UFo&hR&h_tx>=m ZIH^XvDb xJا0(۠$Ai~y;Wys~*ZE7>:QnqvyѠ3Kvlf w4 |}$;ed$/v" C O 7=8=8Yi\"t_*RD2J(6YHI!i I)hS ׆BNNvYάc .L*]T0taHl6ƢZf5 B(&qj iatr,c}]Vk BdWƉR*jP"|Ԗc b" ޱ(|Z=j+Ro/ `)%/L %/t$Gv;0- ApjE *x o8bA&˳1˓dvN|ty}HY/?dn~wP2#Ղo䛯o>>lށs1WLRH h?@52=]cxҤ f5-(s7ӓŻgAT@swRM$kUq0;V};4"tWe?Z˞a8tf8|kQRqeV BΜj}JgK}PȔǺr2ASD1Vrf*(g(B3Rۣ $#rkPRQ"cdEk%$/$aC=siEOz-cTI($'!;N HLHdB3ڻd`T\\jS(kB*OwSbۘRIXI> @h~eb3 gOb&g1i7_HXu0!aں"syH4z!D$~!A)oGʭF0m# %ĕS&b@uC57xn>S[B=ƙamפdE* ȮDBZ'e@p5>ĜMl[WrIwVsj$eQ u8NVT7:BfQy->3<̤{l I„zSǓ^u L][=4e 'bXU>Isۜ_ݛQ\)F1O/ssz9m#Y_vPsJ{,oNcd_r2P$hOH EBPBekm zfzzzgj>_?.%ecVOG\zjN7"LS/KoG <8_' KhUPl#{}N6V#qWn[n=}0^C?(Qj%T!you:`;9oK?<EE5'-xqR?$Ϧ`z'P{ӻ)]~-'t9qQdH n4*TE>uf>ADJR fX@֓Gɽ [XT@X&Fs7K} ¥~Ƈt{(2, Y)҆!({e7s/>7t }qԷ)G߼ܠע=䝈(hhkl횧cQ IT9XÑF΍ zTkζ=3%"N)Kw@[];'Rܔ-j=YoUe#ql 1ݘt0k.=BZgUdf%AsHʚsLlq R[u(N{(Wzle:Gxu4-5?3v[w2?L-aͨIMM>\]6GnpU{ ҏQֹ u|2y[}MKXޝ`k)^$X>Un=A=ly$yDP"iz-߀0nܚ*JH˞BrbN?:wM} aަp*"TS;Itn.j V{67}AX-rptr`'1Uͩqk.laquq'5l-7|EPm#."O&_ONjwW-~h~:d)4\~2oy{?^ތ&O :[Qs0^“xR5ӳ Ti\Ot#;m!uv$ӳmw,2|#1y] .(X7}aź֗Iks6\0bB$P$HqP$$q" JL,R\QQ&V ΰZS 8hJ&s6)-gIt9cABbZi/^w( ?{bZog"#E _d}j@>~~M=~f%"]~9t~QF?RL08XMFeP#M$ӂACk\Bor2W^~B2R쿎9qF;w>@{K+BY>ˮ?Ij@#Hi [2N80wӉ?[/zO&`,fAd.ʸb/`>SZ.WI0/ƛ.2|gR0Bntvyq"_*ŭ /KAF3oK+/rj U]‹5gݳmfj[SlX4G* }h0_DIk\^ y-X536R7.(R&Pn/_-]0ݤUhNM 15fINP) Dx<Ƒ`)#Y.Ix:ZZaRm\$(t.Q"gwC]U{櫛HEb02֑ &/}V3-viම# eHB.%' Dh"]jիWm\CB=( ؖdP~A=!Hoc/ï~mAD~~ Ǟ!gZ0}$^iqQsPXgv]Ɖb5%0ЈI U @bEpc,H([ӄH'4?\l=\&l Ex 7^>P Nhl_`b"y gaDIk0"Ik1 |1JLcs@xv=+[e̫(__F`'70+GfKoI: ƛיo~]1r!ȢnLĿ v>7t,%+S-/v}x~`S+Q3X{O wnY2b>:aӘ$:`Y3]9j2֯[@](^Ή{_)ZgoqXbVGo?(*T糒4\Py4#ހ/ȳow_<} ]F0ͥ> T U:W.7kWY$t**~[u]"/Srp1 Ǧ ^~z^_GS?JHF2 2 $AA%A zOLAac%{;O3l9Ҷۊ ]z{!vO{I&k /i֦ޚ\_G7+7leRS*er9&9^Y`ٕ qRdE/{!%К"Ek9sbahB1Pc)Q_bcB-4M0Jb\&b :s2ӦCh㓝JBt WF'2b1@ 7˘((kR,1D a?8jJTDqaU86Z*(jDm2AEPMDDua+Ms9Jfߐ!xZd88'ièUL ! ,lFBOcm+.=R!9zT*s>x;Zy;%ˈeHӿ&wo]> D@mjLWw:ǯ ӯÇqu|25Ug&Z1m֘:Z+Df}?7iw/oFՇ|~(9/ $Eu$ա酜~u TeK*׍agWB/[+krGh[7YsæiŞ#h >betm*Ӱ\ıvzT+zL5_o+4CD\nЍslaEif74]uMغe.ڪlu1*gC${Y;(`.n}*RJIbo s`l9 d`_4KaOvBw%]Cl񯂺ΙXZ60>"frHr?z*V?rj 9帢˓UmvA>?[^DMw_ %R.5̊\ Fج [aMﱼHf N45BQ׍82B'$a)S5Qb$?aH$,J6D)!l~07(V)!eSi,AxcBl},# Gڨ8$d %X>JÕ#j|ycWdo\r#K `sN<QdQ$@2ƜYp+DbHhn *p!thK;/#gmHrguĻ8$/dJ,߯zHJ×Ù$bUR|G wz|RDlR+BV/QI/'f,(oPAI )\ǍPMwJ, ȠX <(U'45+kY{J8.7RLF^Hl %oFZ\ Eu]`^% 7k'I,A M a,&)Q"RF ў՛#OH +"%50$V$4Gj=Ӊ)g]@ T3֓ynW)*Ve ʹbF$~^__wa.\u8bg9;?8Sȧߓu:6bv'P: *{,,q݌ SݖrA`fv@U<]ه <'0h-Cq*r _HQR&lLG̡Pd'a~(n!ޔFWli`KiNIIIIII<R%cEbHµ<%րX&A+nNuNI.~0H=-<͞-cz,~^J~r2ʘK Yؘ*n.)E\H!F#dqh.`D =8xj5*x!R~=c5L38!@~SN:t>_7e 8rst8'&ąؚͥ%!.>i ZbwKt4Pb 2_ç. ji{u)r=Te6KUnnTGkg Ed8eմ?WwдWy2)aԀux]_:38CywZIR WՇNUfKays a(-(coљ,JR+*Bb>jvΕvAR\WPg| 8 !op٧s 2@X%9Qrd QXcq+Rl[%kabr;iv<->\Ngx/feY#DRq<0Y_3:ԱO:XcV]e;Ӝ\wIs CsTgcUȮql7|(TaOuH`vp>дVrہ_dr9яdQWOWͺE=EkyT sc!4__2Q#mf`Tz3C9\wăp#I+fI2JkYnTO=^&lSN:Aq8JCI@n9ͱu'J))]5M]Cww? 
ZKCjlC:BLjLN[f 95o>T>ϺlW1>`b3pE=ڈJ`+A%()z܅jtP,d":/RL =#}򴍢[Ӝ+*-U[9Ke^˵uiu,; <9Y / '> w {a.zRC q6|>N(:j\JqY11y$QxI`D!РQ$k7BXÜv# C';bbkWDxP衚-~_(q$dD gT 0d펝f $VZb:Vi|<(jOCU޷\(>]LGܕB8(cض+ J=]mfAա\wX%cEkLz4'3c$2r T]}uЉ~!˽4ELjxI-/鱟р#ڇE+I$9nceUn~UYŬ9Mumt[״1,w5gC*o+/C2+G} |B+Tz1dC [abq wj(Ҏ5^9/Rk O#};x)͉_3BQ ?{cCP~Zz՜d0\ @?I8@=```FuR4 `u*@FE޺\h nbuZ@Jzw^| sрpъfpnV шgu _\O3Fsz*bs5-q"J&(8\GSi\eI۷2r/ȟKr0m˃/G+=n9Y/]􋜋QVߌGssuhmG?|^ګg >xNm}t4nƗ1jCc忄9в!uԊcjwnjAQ[ڟlQC(]c9e=/VU׭=n&g]2=W"oRELT0V& SHX2.&ho1[~?׆x_Voϵ \zxSL*XKcę$AA-PMC4)>1߲9 Ȓz_9+,(|)hD9s) '*u5k>19KK#z*E 9 3oY6/'gY j6/'7;|b3Rp6c@ -8'Ө6 lɑuR|NN}lKdn{;RRͷWf EL|lPo|ggoW|yE6ׅ՘jqT8Jq^pRPmr AA3THPӨ@VeI|3RHߺ٨mL{Lkk۳GڿuG3R2,q%Hb_0PN9IPH6D)1Իd'k#tdMDLe\1Dk"JJHEJ +x ]驯Ƞ0"͠SN AcZ`48 ('P(;0F3djXnR#U,(&;@*`09}S#'ΊB5]7ʌds],w@@@ngP+mTuNI9Q>nWExH͸fj`c@pKlq%JV4'!F-4@k%CJ#KӚ6)lWD')%DYNtvh2_ {5mc}7Mւ-.8hhE!wd,!jCUhVi=[@wۜ;{8Tx" '9"ګ 2](Jz ~N` O`ꕑ'^NQtJ277Øwo35lP @ ]R9G%#BwmWz vͩ˩=x@vyq0hM]f<߷IJM)VY ّZէ9u9vZio5/.Sk$l%hsI Rwc㦋ߵ)bkXoɫ~#|kUOX*BGdt{Hp\vT6 1%*00 ! vQwy4.^4p onJ!|Git7wf*?<4?5-VGTVkOk᜕<+ԍv/7o0@_Jłˑ.'?Q:3ꎦ?̢#~YՎ(Z*u離ͺX!{] KVHj]@DQBJ#. %UEq䥰?$0dbj&l81%ZJT)f~TJw~X75WJ׺pЀ0"8БD+K\i- ~Kᄅ)׎͞ CjK}G'(rW"fj-&kF'b @{qc0G`.( wIEUEqUѠ 2[UHAr_$xA|hS><;PW hTZ 9+P(5AF,>l//=R2>pR8z=98g $dD* L^]> *f&`1~n>w2|_ne^`n(r&pK/_ IV [a6\1íIr1Ӄ,FG0Z0@bDN,1=hßC{ԮA;ǫr W@ƠkDp8E&XE,w;|ZA 1=M3y8mB`h`yY"@JD["9bL kMVZ>eSԀBw;?G Q r64U8b`&*!Z%v:FSHT!(Rm㵒 ,1%iUIEY_zc0Gi&b8s(#5 ǎj#Bۇ)MʝUw#rW~zw3Z^Ό&w)MW{Wa=z5u3m&vѐ"zZ/AظokxeT O4.ͪul~2*pJ(&r[?)FA95//ӽw04+W,pzۺ Ob:hcݎJ5uhݺА\EstJRSNuEeb>)X\ RT'mviZj-{֭ UtCtc{)%:.v\1vi1y@(U[B -$󹯰_Jٟ l){bz cWUMvI䦠[}qόs>!$hՠ%MLj 2}7zD̉J% _`9e Fh8 ^+КQEh C@!'Jɉ">n6$B2bh@sfSwUK\>ј&Ǖ"nhLpL/Rg2kGwLu!18|>(k/6w/DZ4,h'x&o<'&6m\*Cd2i)X>`^f!B#eI4RK<֗1-a~+R?ײ "Ec~+M AKJkP!qaE8 Ɍ,l Ɣ1Ud<>,>R[8@ 9wҥ'(IZ ۣta=OԎ\w/uV(VH&}J!tgٳ9 ?&[A-d>SJynN؆G^مkMLbE~㟜;QR4Oͅ l=L慽5ό_| Bw]OߘQh֘nzĻi4Qӿi?mú8պ('҆],GpLk )T ';ީ&-= .Őgb%G~SL:T1rl$ojGI%pOL =;XZȎ Ĩ,Z=:f-d0UjV^tN9| _{@~lUl~t_Ng2r}Ä/ƒuyy Y# Rd_|s)9Brִ ~C֐,dM$ʭUHBmo\Aы]4̨s-h]` ̂U1&jco[ťہ 4+/]]0mYq6W6a~4H;k 2Ԝ >qyX7fw+f?Z9I/î4Q-*'a @wncF/Ɗ(TQS\bibI*-s_n 1NŧNj [G!L qb{-w.kC>t+C o:7 ]0Pԙz }. hHJ]O%e,5{n*d\ɸcNZS7K#8XWzWZ˵R(eRo3rY5g" AOT&I׺h=z)/uM8ф{$`Kp7}䛾ӃS,FW7}b$B_W w['캲٠6}_.vH]vfJT|J#]?i/9Wf.grgނ=LD}O~9 :FbE6=n{t8u&*JjYtLt127.nZ]_- Z1n7a[MݓwP$ Z"a)Nd{ W;|>-nv ~V߼,!NjX_a_qd_EBSxs5UE E&F"7U=?m&OfWݼ Y)W)G?ž(71q[bç "l]L#uh3+Mb( (_Ҙ;vzutת0)\9\$r|nH&$ Rajqg*g-{NҞ;ƾXSjpz!B:Q'eIR%,x[ C 5L=sʘ㪿*=݋_֝H.;m1 &I{li0](FꏟOwJ*\P EWEB񎝎ypk>2-|d_v+v$IxL\B0b~k`wٙ})4ԱADD΁!Olg*HkzxRA{>NfַB-% {AB%>\e-i%-m-ƓrC%_,1 'gŲ‚݆a۰nzþb9[,=K0"8]Wh'k؎`FlIz$Ν4Q%.STh-x4& lbmњkv##;^%xc}^ ;RP-D%Om!,}10YCpOАyK:Q_ a>ˍK8ܕM;iؗ:.S{Ws'Y`jCuhS7&z6c }L~~3&$KX+Eb ªwan+vVzk^_s@ Rg93ć٪ݨ1Aь (b}R9H֑^= Uo?"CiE!u 0'5R^ξ#=\:{1 y߷U$1G۠G(|"(9Z~S .kw1UJ:B\@+Q-W;pZ"e`[ X)܏=V  jp^ N !waĐ1îAE JYD%W#B½ HrIFV/՛ ^= |ȤUnfESlx7A+6\)Y e"+u:kK Aa?n&7r㟌-Æw =$9Z" a3k_ N 29Ev@()bK9:zC&j~L(rEչF4ۇ{;~6Ϫkg79gva!Mo,}vvrqpOWB#?=~Ty9n;tyx%_E& YZ L`דY=rl}"`)AvsEfgp"iLf?܆tb`BM̊u{F{WrIy 7+OJ:ZjyU6P`@A38('21In&$ZpCWc$_V/956*pR% c"*؎okWoV6Uh)e֨/IijGȓi#Z٤O\;*52 RرIЭH]QA3y&@DPDqWփfN23 ^L`&EcmVpqt<U)tf#OjÏ0(xpAWI2IJg (T1FX Q' f)8B$B3"b@v$ŃE k'ɡ$RVZI 4~;G喸LimlDY"9Vx;FX)| y(RfD-EBjmk`= ^GFQgAH℣w{f\#hha3,kfBU\ EgbHΫ5x!f|qbfQs%F?~>hpŽDP%d.S7@Fmfݎз/wǭ8l-aJVDS1((STqqMzP4o֜c1fx/T6 J4@ڞrz6bc[SWO_{/';"3oՄRM61es  |(ӀXT$JFwso-3y^b&/#r.rR#;b/CHCՊnB/~YGގJׅWD>|z{;ń`UO`n⒐/qt ]xAӟnKnqkȁL#-5KfI/oZqjYănsfy *`y4;v4Rsgmjᅴ|ٸj/CcV8KS=ẹG/ " Tc tx]'$߽EC5B7N(~D~@*?˭. 8G\0rLǑ2I(`6U²ful[t! ءhb+Nي6pL2r6)@O@bsQt)Ije㲚P{3ȓYY:p&רqyQhz=՟npYC+O>_&hig#(?4>fo3܋f71蛜{1?Yث;[~,A!n}> v6mȊԚ=KI-! 
o7 {4Ggsɞ8.Q##G)F"z`0E.r_ȋ ʜAiyuPq.RQȣ/%MHpeJP ct-KPaH1WfBhނ^ZqSg[E .V)1vd2N4B(u@SȬNQE6N>J"`W%4~Ϧ_) +)~:ݢT|e[/Sۈ.>6\oHRy \P.Li>WhU ‰Ԛ&NZ*}4 ǃ9cEx#(8),"#@+.|G8!4_Vq=Q}Ckd0'אPH8`@hπъ5bs)C!j=03 ]F9 @ŚC3FCk|qPPi-V9@"n}UIISK}]443uvr΍YSꗗ>orK _{b1Ty%Kizs}(e s:@Nښ\Bxhn?4g{Qar}ӋFeq??|s:M}z8y~:$^=SH,a{%QmCO=] ,:\V uvZ:y1y1gD]R[ZT?͔9ڞR&raJUctY;K**ڜ NSd6svzq5qW_dݫ,Vx0H=u>l MvݥlZkErPpygeb@ɍۧeL0dD&b,*eB1zIɢ. -&j~*^Zi^=\Rg9%O 𗙽Ob;_w 5q%^^wb(/Ǜk4h ?/';Z @gT>`%]w,df²ä2%h(Vפe~+w#4/&pF30t}ŁC@C>EAt/%mKȏhu?u7DKB#Csw=NZ.$m6lx_w_vtTAOtriGS¤*6p)<IrBRIbIHk\iGWZB5 ޹͢HS)qogfQ5 OKJQ($Ŵv Yn[a׻!jGV3" 9DKK-$ʥ6qpwyeRtZO _`V(!V AaU cɍaj_ɝI)E!O} (1Ǝ{F\hL uY JV*B٘Ii 8q}r"kMF;h(s @ g؃ N{`P] XL/J@TxK5O } %0\g$lƎxPm,6:THٽȪGuҮK"u w^^7897up%|"@eȘ?\]ӳM?7WbWeqVywaX3tXk:On&g 9V&"ޯ.B vYG2:7/[_rߖYb(4;֏ 4uHNNZo9N೽yS= `]x_ۯ d{V\8>O֣C]ZmF_Ug7NfM|#{>c xak­%3hŻ+'pQʁJ,N웚἟ c+IǯkFE8/Z^, U4#KZa4R2ې64u_UUsCU2Q8^Ǯ{(PFhh0J^Di^:IkQraYW]kF zɆcs#7==K84A%h`=Cd Er(&d/DEdp< 1+پAeM ɿ4V$ށ:1!ィ]Szj$Zfgq.?j/u22J֮ݭEܖT+jkm&4ݿQ!QL-okOfYAM߂jݶ+ma^VBMi߂ۍElΛf rP'Yޏ?Z$t!RHc٣F< ʴM\j(MJqPjTUgi|Zr&@Ba'v'kב hͲ^Rro"RJf1YdSf i4Sp BET!2o#袳,(4|l+&XG"8Ef5]vcY MVW턀Ѳ@d;45_>4ݬzXy5' oS&}rDBo6(iQID~X%pt(CPҷTGB$RqMSȺ95T$L g/tȟ_Ƞ/7_ux&;?ҝxMxN&KK#$rw'aH7(2xy4oja/?&鐭,Giyhƅ#)w9ڛg$P`Dr.1#@Y+Ȗ16|ns ^k#RYWI:qd,%ȩ%1 ҃"[yfp;\qUOgGǮp "&"BIyVex߂ipGx̠CUs#` gcLFlH(,I4V0A-cl<"0ptjR7ǽX+s|LK[`aMPK~>VGsv\<\vAyLVgW0Eի&cЍ.@_#u?PfYl:wHK)0ɼp QWcd-q#fsSZfUj9+s=ngf8Bsٚ%Z  GE1Oe6<>o#f%>uY̒{ \g]سn|mX^ks?beSl{b>#M4&KYG4Kzl9=)yvO+]-"TTs2B2(Ι'@W)ERaus\ g& pePpi.XX[7`6j@Sdd@stCg!Bc`P$^7*zusx{W,&yp,,KZ9Y<ԙپ:Y=pzgh^QLtb7bfly4 kQ7%k]g4>aO7G{젇PO9~j .*ydq5iEGkuۣBmږވy>XZ[ߞ Se4Y4x\^4u~5cX1? p ,XPySVfE_$TQ1j"J:>4ƨ0Bt; Tˆ8eBpcu7Ad\>?:⫍3휒-yed敊j9A9Ө띃n<8. d}*(@iZ̿ F`o3Yz$&D.ɖ1^<|m{LS5B(Z92PK/Tw_{{9dBbl(kn8ҵ z[M4{ SH4~OT`SqWw<:R\H-qҦ{6-cl,e2}aFz/b@E$̐03$xH>>8eLCu,b nxͤ8>Y .8d|0%k+ESphxUDL KS/4 2^{ޱvx'S 0|ͳsJB GAhL޴C~+v~%x$O_~CyTAIr..8RJ7U&3R҅% N~nҼE"{Kqun}6x+i {F녺)€z>F 뇩uNN<4+ۦ ËQ1P@Lխ(l+lwOl7K-:GF&#U*$У/w͘A[}, @ %tzMvr$d *![E]X#vbuw:cK0b!T,J<hh#3i 웲_x~xw&%7)>ϣ2{))UL5{2xѺ~Zpp?f"1 a0!͵6+^RhEٛsYH 6mYpNX9{dՀ C<iNrOt :А-clPb;Mc|vޣ^ (.2Ƿ"7n)aA1hW g._-+MFYfG_4^  MVw+؄Ԑg @p1&`UmVS&=`ȋJd>aޡPY({(31xSxB0 OjGX=-+?Ei@ ]9#Pky lWԟ24]FKx2&-2-cl`Ԣ}},\ {lev/R(LAGDӼܤqbU0.x΍^xEc]m=&)a,fږcS1dDZĆD'nB44FÃ-@kfQO쑧9S%cn|:wrsqM󼹀a$zW|uzG`-s^v xDSULKZ7(i()§\=ʞ"WXn-j0Hg$+b2FHN.N'}HdVe9181HR=P4#0 fA#JAx_ipG)Z3 5LF*8ǣǣT=#~w&##Va~gɖ16bEc"/bu> ȢQ\DzcdF)I˴C=&01^Ơcm\Ko+ZM{2}\ kDS=#R=d)C(Zaƴѫ 楬yM]f 6bsl,KSTgFDZv%9RjdZΑ*Tݰ9Q@?`HȊRThg̡UQpu+s>_E(ojzLS|w^U)j;|_2j2T3JQtTV?TK.^{X]Z2puonu{Z]ѷCa٫+ ]tg)g7u\nT}gFdOciM%$(-'eRQƉm_Aoy#O$2X۷dk 2B$'YPx.*nN?ؓl)1 ?-$86 ՜1FhT T{Cwk$~A5j&֜OWCLgKJw4N[xe9mXSV! 
.xHoz!}1xv; E‹ޯkCF, 3+QH!C%*R5|nڝTAhb-Z7X (yZ^6ȃ&Z=X_A|Jܯˌ՝ǭWD91_2I0* <$*ccY+*~H{%~Ob$ql.:it-$"cB=+ } #)Įf랞o< ĤV)U!N)Z&=8g\rŐ$ n -k+ƣxĈFv "X̤UmCmͽP sG& >'rNar '>%v%Xe gSPXx0F!16*,1vLx0!l]ڲ62gs+ld,%y,Ks!c^Y!T`2pj5\)ZgSb_:]}ʢ""nqEܬ[Ⱦ":Hѵ(I8q1+J3?ͦm*ס'VHtYeiL,GsW>*ƙFP745R}-D @Q^E2΂ֶ^ ;rn1ӣƙO-ﻸG%iVrmb?KYZҪV^(m[( ATj7ܵ*^"̱ g1*Lf5YL \m%" y쎹UMDR:<(&nXˤozM6cVݳϽjP7l"V^;4cvl}x=݂'Ut((chO1%` L0&''m{wn5F 0Hw˲`FgX J' N RX⛕?t*=h֋ᴿ?4+.?mSo3\*$r{8_r{<BtrG\s0x5xxju;Wo 3ׯu',|}Nt_:Ū翽.ywygS8]"}> 'Eq%J"I}plg_h Qθb3GfxtcLe<(tAeUuWǣ8.w==m^~m!חj&FA*Oe!T 64#$J)$[cwiy0v;a jenUΘ~M&%LWX\X)k4M\YĕM\ ꬼHmO7XjH %rvmR+⭤W !sb35"h1(%@CDD{!Zx CkoU.ST,f*m[YGZ Z6Uߴ@zcV&bc3 qM~ e \wa&Rڗa#0GD40j8VCr@N s5\Ƞ_ï' ϊD";ioE 7|]e6r5!ݏ!3cjGSRIގ?|# <81R!9ZȧKr p%sq6擗g8ELnZs^dZ52]v!9[(z 01,BQ~R,1\ "i5HG1*$CXM[4a=J@B+3E-1R D8  OqUHK%*7-Uc1B\u3%*k<9tƞl&$NeTH*4u& Hls"ds"#o7-$4}O%*j 4RAGw2(1dQH[HQ0=uS={{n.aཛ^,ξw8,RU\5gw)\ɽ,R#Jq ʻh62`w@T@ȼ\TyŶHx#XB%%c }J,XB;Ў%c XB% ϭ4UxnЄL+Z =rWk|10-|+/U1i˲5X0)IF~qԊ5,:N!SCL: 55EWQW#IEHv Rt[\7ZJgr5J;# *aoZ,Z`u8Bǿl];}{Y*MDeW{b1#S:gP%O=0yXe&ћ&z% eE٬Rzpi J9;{=y_-s$'FTw@Qqhu75u701ѥ#q6ԡ| ?GXa&1aPNwCRK̨pEi|z*ADJdQF9C0]CWJ>r9{ߕ8t%]CWЕ8_Ccn-or7A+/ox2\ܮkN#nu@;wźĺyc+H3<"#"` cd[ӢKdKeodSfKC,*#"#{YiJO,ΉCMGgf2pɠ֮c?k * dv LJ@fo!}ɲT F_qJ{ SI!4^r4AeEÉY6 n.8I~q5,YZ)n₎1;Fm; k⽨3Y}d}4eq6g[ b FݲO9TZd۱N! v޾}rV;+\(sp\ };:z7 :i7Hm@ #vϟ&nw9@;qY¶ط&S\ 3$ N}0!\Kę*fZ63om<x}A24 < yE2w $g=oelAmV )*%E |dB H@Lh5ߟiW3XRb+3}s R]j..5$T+SV!cWF!愭J͗gvX3`Fl3hq4 k6Ơa Ơa U?O`BhTEfM-$$aPP'! #p>t_"ޞu/{2C7ύ`y^kÂ]˜!3M%ԩAM[,_Y!鉋1i&ۂOt5s&rl&zG惋@1U[{3FJ~o6Xym 9]P޵6ZIW-B 8/>X64s+3P'J{J>P|WGg|^<?PexLdC6BJFi^ =`ϟmw`ގ1}qaYbM#n&%ՅTnX!)`I]|@6b >Cr2 g`?Bs̽2@/G&ej6T*VGQA2\jo,>KLk|| w1rSFfR+e܍PK={svAjЖEvΙSRʉQu͌e28#歉pmP4#iK/z^ʂ2ΠX }R,&eAA+T,f\Cg/)<~q7!uc*&RKܠh z 6܃U<!={={?h]j ,Ux13;c>5Jd,pM;(YR9ع<' Bt5׵lvڜy4ă}vAxtu<"DBV \"%@kThX cAjD>FwDh܃}\Wr&WwZ,m\w+x3XB.c,Y؂ u~Qf_\֌ &ǕhJ V[ jf/LX#Tu2!۱oƬ3qDwc1c1PĸPN(䵟a$u\JVL0/foPX>0VW:fFcyn̥T%F[V-o-k )⌳+Y AUY|[o7b>qc:sl\h2F19 hlk¦6N:qgU D)g:v=9MSQx?lϞ>T9#ށwn>VdT:8oWpW;ϦLG?8}9񡐦3*Cvt|]8}ٴzymgVXe?.VYZi[,XG΄d43uUoO>L{[]fxɲ"F Q~bS r5 -P蘎\@8kY} RR5%RoPmXPAm85 ;ڰ ހ ojz~pbJCTYU0]^*jTPqz5sS{ss>^s.rKHjװbbpܪ[pcB]Bqpڞ['t"WL2OOz }%Hr:wHR|w?=$8zcyvL`Βm ѡ"' *s2q?=N)# r_$@vnJ^v6mHne;%6mch@hYSoڒYYWb.[!c`!R8;j==8QFM޻NSXDFR'GAJ$rRX_A]eax@K5A3wgO`擙.~J=:ɗltH@۱Ƿ['7_m="_q*WjFYejV"r_[=]ȷ#g'8K>*N|&()Jʡ::l,)W%:ug`ϙwz?Fn^d?ܺbc>(2Lf1nݚw_zW=Y7].??Ϛ)+\}kpm }vl9Y1L&4bM5AXf^:gZi}ߑrP{#Sl N/\N=Ztݓ1(/?ܨ1J= ԛ7Mb,NAŪB.V`DovBPvc6ⅆWn\\Թ\p"5\'}_vןFaJBhoLdbRIrK^Iv(TEb&Jd\l ,uDI2\]eZ'dzh  ɕbYԫ#Ddɪc$EKI-(=-4gbF8 BE>lrnrʆ(8ܔI&'|݂oGmJ׉6+qj[  >4˂ԪkApaxւ|z 8>Y>ލ؞I>m\t}{T^W6Ѧ9ٓOO$R_~`:0`&w<=ƗgÇ[V[Tڨ~Ͼ0~Ácf L+pb[^.0{p~~`2Ii)Rj/E粁޼*GBp4?7z-~026?2`Лq@VbIj!ͯ!ʔ StUjL\E,b@#_tӚe ,"Kd`{Exǹwe=rH4dWI`c^< v_C>هJUY̪4!XGwbDP5yZc,.&TKx4}̴t'6afmHy-Mt 61#[xchF+J4L.` R_I)0W>I"آԝ$os43pLv(qW\/O b 8`$zPU&fF-HJ!`&ɩ{~O<-4dZ<m 1b!Sm)1t<Κ!/r%>wxQO&2e#2i :9~G'TXqBJa+c1\=* ].kc'hFmŰ4l:-%gmƓ/)[gAޥ&IkVf,b MAԼ(N[F~{e4rjA>;>y%$U?9ͺT īU]2E$/(KYΉȋgu˓n2m98'ϨLFؾ^e5,q=DF4va k2 ; ]OLrpMͪL˃wMhVe3VX'=lW,ndNYc֢df(Zۄ%4’j#Y_8h(|?p! oB)Õ6t~ ( Q7}i%.%}Ԧf/Dڼtگmˮkv14,@m'K 4q,$% SOo$ί`T.%ԭI2oAϞ\hr.FP䑗n,5Z+dbYK0C-!T'G,Hb}:@gt!vs -\" x>*Uے))Dcr(f31NXg3&.Y?PrC~"Tti]>,D$,)w\錴]k T. 
Śx<TVxI(ȤuBDP59AIܞA"W y'6kk0?.A$$.̂ 4v L^F$aPq>pK*VRZ@>)Qd%} j$&f!KLctT _DMN lY28+XA bN KCt!t$uh#1HՐ4=k0[Y=$- IHa[_""St{}M_Gz+* _EUvĀTo6Jz$<B"ē!BxUE *7KP f"x=@j┈yE~$;^ U@OKQ ِWUv)DG  8Pb(\IZn% K/p&&Q򁽃:g^TKwM Kwvw!{D%EܶSXB1 7;lvEhe8~dж݀e cģ-q4F;( *}7@J}a;t'qIINН jGD?Jw* ^(nئ[rH4h\޳wpѾov5.)WBS uOH $n\[lJP30I]M~$&r3ҏ3@qdIq-w;VX3-ɩݤ4@d/JC%š(dj)/0ڒd`դ{mԹ;"gp;2Ucm_3uKƋ\xR%CAE_ʐb6dcdyZ4"yoj@'rϝ;d@\Ge_Kt'R-.b&ww~~A$r'6@n-J䑷\ˊMLHLI OR { +Ln'q|GD;t)㎖`(Ш i7U})>6t17OJs4>F 5oᦈC)$9RRÙ/ňMB f^悘|Iڳ)54cKA^ya/chTi[CPJpW{PQ9zБiHufV52z{,ŠlVgoxbv&qe `ӳwhQ`(ag*}+$q%TW>jI(ۗ=2UٓܽT.uQ<5$JzSNF՘/Su fOM R%P"CC5V~( CY>$ JάLŽa.y(/1z,E$)X[+V1QS]"\FߚdTWZWuIۮUwHa}T/g-wh= -wOzCJ%91K 6-ڹRg8Fib"]ݣƞ`}Og7eY[tv(%N ' i霓0Y&)SAq<|%|~${i) 4eVCSz[sHc¬dB "vWx1}kyJ$6X,Kms@Ҹ-nn;u%!1Y/ 8CeWZb3.MtYHhԑ`V!vU%)s2Y⸧?񎆯iz}M^{DN)7mpo(B9˫qˏYy{Da}ֹ{/K=)dG$+ ۤS7rEnB=~pqT/OI#B?ؽW_.ϯVz6xmGg(=eg?tYփ9{^^B_D6dǩAv̤3ʭv 1EO wtUSRT7.qbA;Wʬ8wWoҜy㭽ݣ=k<$`H秧gwq*m֨o%0V.~\_3OnN-&~D`R"ߢ\]WG}8-򬐣ĥM䑰J:Nsr3Qv:|Uj\3O_=v|Q,{S~ iJ__8LS%#JNغ=wPŋkl;?p[on;[kqԾ6YhׂO~t9\xy8WVr8nY}]ώ. gGsy9{{׻Mަ(tIإfCy)|_,>Nw5-Q彯NSyku$_|_a~~VHU]M+!"YS 2&=*]1a5o*g3ώnG#辆h=;/9"w}CJ34Rv5݇Gc  Y`Oc#j|zz9k{dhsӷ΃ 􁜒qX:a|+tus.cuW0~Q tR{:EY Ny-tuox"A9'>K&d@,M |:P~$i߭gxgB#_sTRǟ9 4,fz Nt6%ţ[1ޙ!tYKz|_.#NH̖5(\\SVq&2uB)xGA%sEP9-#$#$wCCۮ:M"7B\cۦùUdj諚F [^żn(E J/ p1P" FɹmuVwy%cc L+Xta:ѰNUqH7/A~Z-uy!nrej4jJnU|zsdKwͳi6߽G-%'m'nR-h6&!`Fx*&(uU5kH}Qc$ fi&5(!+I-c&LTpkD{sywx-d"u[*(pD/Ц0g| ۝nw8rC@woRo.ۇ=1Ƈ'8-3W]$ɬc)DŽ!"2s(I%YD JsdF8 pQ,2(Jxv4Eѧ(ql{ Dɟ LPp*2WIU2Pٱp{(B,l; !ژH[G"ыH/"Q {7]PEi٤  އ~`gL$ÓiD~웢(Ǔ p"ĨH~PLrihI!G(/b¹-X^2eFZq{'@!V6ەh5%odR "S$"8B<|ɈP-Ff<:!-ewfw*y7+bݾ-9d3.!N1jQsxCx~Hs{uYq !nmE y[@=,D9䶈`lN"-r@d-{ӣvD4e=@ܢko-64^Zl#XQS񶧎X (nE hG7OJ;.h!sRvCluE% ʃyq8K9`|Br Z *ڊPcF!0iJ* l|˯'JyTr9z7~tƍ>?=~ad.Ǔ|4so㻻8u^,'N~Y¸A_tl`"Mo/d\^VXJ ܴ#8ۉ;C}c![9olaERQr\Z2υXzPC469 W"ٽ= Ñwi0`i5(YxJą#UpiiYcK$2GJd`񞇡 i/Se(C~An7x۸E%qt-?qmt`|M?7ˣ\Yy'= 緣HJ=7_e!~C,.hBIP/_͝ oIHDID x3W:A%. 
<=LǷ+0"ŸKa -Ur +V kIYj^XI}q1~90]76[y2ǨBs=haّ[f5G󯮲8c[{DBSܜ@q4#8¤:^QLx(iS' ACZ&0@qN5x2 W'swz\d_w`w?OKv47hJ磟ܣ(э{6nsSMFS~\~w@D?Wwz40F~z{=-,^ =8[!:'>\\DAq_E=K[`gv5/ů3b)n^סX0*V`!P{cɉA/ !nnc Bw0Jɠ-wAc_J0A2Y.Sڗ) m\4 Sx#c'P(am~0DDdl?0 h=IZik{VO;>{ͭyҰ۲FO٣D]e)ILy,XNk*Mbۥ=a1 uXC!y߭~[0mxSۂ(O ]R\51c]s9IuB*D!,'LK٠;s1IҹAUNͣO &A缣Cd/895>hGĝʜ!Fh#Y~3N,ذ!'3!B" ɭyf 5 9cE%l)KuBbN7jWC<>vJ>+0ϙ& H$ )+Z"D:ʦ-/~tqb KMMO/Eǯ?-jy0EU9pX2 6P58(8۫[+Q7sP[[nD~mF_&86%M^eJH^q`?nEbFP|35]L)EP{%x)aA0.=s4T+|8,a- CVꤓL!A4 jdqBۃ@26G);p˰Tj6W Gy{ܗ IS2BVjx<[a3yBw pͩ{R1o"&@䔹z^G*S<bJdKph-4ж&;51?2iXF 0fjqJ*O`ɉ \O;P2ȣ|ccoE.; ;V/3][,pJ)2ߓ~ RBDnHN݋J]~}h L^Sټ4w5>\7p.|b[0@ RD5"BZŐŖ5oUmʓ7f;ML!gD $vsDkp'ϱ=7Mo|:%:yB!7!W ĀpJ7)(5lZZzPk`O6t@C'1MjAnؤD\ɎӮX[{ *Mnkr $FVM**pW`4Rz] DI7}?UaWz\1 c>`8|}"ۇ!6{Dű^Dv"ظ/·M2"|11j f:-\+,-Odm%VڪRx]Y-F8}q%ra: ;ُol2Fu}l=AgGnZ83/Ϳʢ^ݻ%>@#tA\Zm8+"@+9| "J @Khb )Tʆm ŒpJ{A@3"emhrnP|&E(%% ۺ$MR=XhceRI kʎ ΰh*gaO.j 9ֲY,1B =oQ)2)%6|ꦕԔۗa+r RZ+h=ҢZ)pb&W/'5RPm\8k9DpbD9&9J'cXZ_geITVS܏ g_gZ۶_xt&ހf!7M{:'$i;wLRWDv,YDRk 0tЇ鬙Q+Vz.26.>SVRaa}"'i坊{r-/;N1T5#ܟP8#CQ͐CBH\4 _S&yTQd3G8(ZcXSwp|%\سVsyRsQW(E/^RKRb%'؉WRbAU XGYE4kg=%':3y``3/fQ"dTRrʐe6y]'!A׳V|;N~w&5~7ujG"h @eR?_?l):(oI5o.OXxjQ>KJ8檀@us%*2* iқ2m7[%I@3 {rIN+Cb9P>l2{EB_O6fף^.G5=`U?>^E*Cc*^ IQWx,~k(U*J[ȇTU~@!88s&iEUѥiЎ܄e>vT*FJ_جL(c T<[@=VNkXZl!l<-!cRSD0e~rh2U@|K a0kzGM ]v@T3~ñ{mVL)bԙU[.LƌA+W z;Ftx^j At |*[f=IX#wRh;vDxNvVQ) ]$ I͎!7LrryGRN3s"kܽgm+C!Ba4С3H/u4,ӯrh\>n.ry4Fr4Jkl8op싏:Cu+X.6o=r6B>Fd!&c׸_z)OjΞuAe2t"N҄AH1(CE„ץ퍗/c;ɝ=hri$XqRFM?*sݜ4= ziM3PnNjc)*ݲvSs{}VA4!2Ԃ9ЌXKWE7'!7lFQ6 /|DžE3V_:Ok,VobʈݦoᙆR!=V,abWJw0jڥ/XAhCS#Ż `a9Z̫ף2\M5D($J~ޥq5-li}*o.f $+3 5s{E ]#iELR+D8̕GkҘ/S䌲ȓJeQnKU܎wƓuBYo>:Cj'|Aju}p"“!)al"ڮʉo]܉g <Zt$LbYjGW$Uk% 4yÚ4v(EkvQ] ez$hKSQcM0o.[Cn󐰩>D=Pv0E1~ZƱ *L)KQQz#׺Al&Qjz!r.F%umv yɝmYxF{ZOy$ ~&vUM~ctŐcCWӟ[ęSnyCmg"ğѸZ1+*hW51+FJ0oqn`KA!L/`߁L/o$\fmGܵ]\w{s)fuM8sJ厨[@*כ5*m|U1 e MģXR]y-%xv&F<0hgFCK#Ht(1q5%:" YB[<0)FS-'u2u&OoNn/ňJN9xo<I dGb~TsLEC2ZĎ,Y25Sѓ:jI #Jɚ[sy'>huznbÚ4#7:+>2Z(FkڀEeQ5NM'%aUKMvm}փ{҃cay%M4pJX {{f~3`M塄H`Sl8M}ޛFv=TZYЕ8y{ap`v;QI{_4ey7p’󼁓ښPP}O^chAMx® p3!Heᾧ=Vu/g>.,x80^h\ncPFJD$$`.؇ݱ*0';JXOx+ e-.Meg-QXI3 hrD%q9=s9$Ү[#d2r!Ȋ:Is߽6iXd7im,j܌WgoyU;!4pŪDV.D%:l`&Gz(6>;4BKD_!81YHM.z'L_ntMR<}8 q-$|scO=5O7Os{a{A'd;Rv"H#TχKAG(8upĥULJ;TrU(xk ;~RӔ*t{特s S4e&}e^΄o${K߿xofl >O΃)|=gR5fJev. Ϝ;{>j~3'ݱC_wxq5{=>vĽM'mfސ9NKo/x<1j}'^r\?D5 7A{gF3O(=*JZo횀*.uëswlP2(9ko><K>q;HR/.05:[⚭ai2SMdlU^mI/gkpoG/p\xG[Ш] E`rfdҼM/_qNzpO8dOK^ox;OU份Cm?3%~BPr;m 2#/7& d턮B֛J|1g@g;-Z7%3۴*S.-.3,jgz|o6eKWgΞOD%TD'N2HH2BOfۀFrAX*\kWmymEP@l\-4F*1v6JӅ.>B>4Աђ4Z(V4) j,B_;Dj#12F„T*QO= " 3՘+eP$kt~7|+Rw0Vۡ@u4|;"뀘]AsޱzLJC^8V Yg%;9t!<4*Q3ܨF7*|!%*:?1'$8(F7:AΒq{/ sBVCv* m@+Pl|}R~kD~#NxF7"|.)^͈ܿ|c$3#U6ԃjPia%G!9"멊e*"?} ^.D~Vb?~6jmշ 8ѸuI*JmE8vY{jUQ<< #X|r/Y^vs0_\g+Uٲ+N/Qap(n忟 Io"3C+\, }CqiB $bBi)d^Ђ`fc LS1ZoӵZF6Dkmr֜s/D/Ji1{#+J"Y9`!&-Eq[O/߮J¬^ҵ8xuud65,L WdΩ9yѼ'^;-]T?W_ m,p&iL&}3 g۟>: GtQ$t|pUgKR&۪ ^mYJ6gÇh;'6˫߽-ңq/-Ȕv*+}o~z*2~.ndi +Ew2WN+)7B#Cqg'>ỳ0H|~Zgӆa: loR _c5/|. 
̾Jn ΰz"c0NSB,A&6`-fr*Ia8\S# ىöJp qzE ,aWwj`2ю9Hx$x䊘 r0NT-io:Fu9 *1G17v9" r>R܈IJc\wrii1SIi-/Bf~){v z"R0[ z7~$Q^ ffoRS1q`y`"G+PTzfETkU Rk)+{7G) G]99ߘW/U@*priJ4bp!ʸ"P 5ౠ1\hQ*ɜP55A-+sykqWك~ a:AMŰC ,R s`qJ9;rv(P=gY6Ykf-ǼVTS DEE /LT (cXf3TH'5.X`2 '4+G"ͼR4gc]^4( J#ppm>b#Ak#Lk$3[8AշfLN*#.,v"3%9 b\ 큶h;k 1H+FOi:I$B]s_kPiBTEY\ 꽀fHay##9nAT%w D;Xh9Q-'ϗoW ⿮UծL77yM?ߧp& w5rULVڼҪJT>#y1)-^"*'c&XNJu3R#Kr"4a{KD|֒sEd`Q $B'O8: GsF:RIBL:Dlyaq捧1D<)"BN(+yG#wVШ EՀUSΓ 3N C3Pq;KyP P P P= |ڀGݧu!jPjhtq5:j,=m&JB\?0G1"M'+G ;&(F6܁آfS%b0K`Rj9GK b1\ pLPKS/fTx.d(6؁kBRZgAMdTXw6_ !:/PPuECd4Uk,ܧ-9éRRa@zp}ԓ 6Tov8$=` R!Ly4sxO>4Ta̴;M?AsĬ(Dh0َlbovc<=T+ f A\Jnt!IJ;NE2e@!E\  +BBeg-X`+9D"@LcDpN$vcu6`״?On f*"BI -S2=³Zg X:/]"` ˌ"H) }IRãn᪇iCym65Cvղt2i' 0݊K ``8"k 0fϪ ŹK$ 7 ï"UiE'vӰD)i+cbJ+˼ǝ!. n6ʇfл= XW~uC&Ha]P&ۊkζz!ˆ2CNiu5CC0>CIbkd]Ґq %4-davDt/^5Xǩ\E,Ul7~8>z3T|E6Wݞ^곴ǂp碕௞:񬬱0Z=uYR`w|ܐv ~Ot*d톅sRZ 䒞5+qWґӅS#dR.2[G:.E6(x+RN3MO˰=ݿh5. H1QYnI`1>U39c5 #q{Y Gψ1Dӭbcq1t>~+,٤闓6Vd3g 'ټbN2 z;qQib(yA>G8Z8?y[RO$_t3O~y%[YL֣@PtMܰ㺢cn/{j0[ǷM.70-Acc=CAzx@0lph3Fme=+ 䒈 o]f]"J'M,TםUVRc`N^}o]0yI5*954 y1|;Ow8x{~4~ETdo6L2 c'?:?Z} 빓k۟>:GtQ* LznZ|NKw4qd\^#-583 ikz<{}ZEuT6434i 5l])eĐur[P+ꫳ*"og^靌Q>m>FwUhLXw0q=`2ayb$CAB 3 D*z49!E9ou2DC '=g3EV$⽶u:*?&30]gn3B)&?ٙ#>!~G~4[)1n2n@vv5乀\b'u\{ytȅ0s{ѱ={sIYoJl9 gUE_.u_.x\ueo.g>_^oj)%?tJ1=fa6p:)q/`3S7ͪ$jUfdb/דIJN½M3R&.]R-Ӽ;!>kՊa 5fYiKOo?*IOF\wp9+75ʧD $y*ɚMF)C&YBf@1T<:2ܾ6 +zx=L@`][oG+P}.~8`q'ٗB_#beI!dEVIEr""Dz8n]]yUBrdJOq :Rcŝ:;m2=- fU9ozFt]zs_/$ 19 ޒ.=x ɠ pZ%DjRN߮o6fRt~d?oOΕ;p%0SXjj@ڜo\,uzXR#WSO qgYDVB希{KyM45on 5Ԍlw2Z:߀Q^ h2w჻@D2]FD\ktxL꙽eUcox6.Xzs/L~u_8At((Qd\@eu8xkP@Ka FÍNJ߫z;3poSaClClq L<*ۯ.zt K7+1E ;]Kn%~PLř Z95fʰl&ùU% Ej:R),~)li nj32 hL 1Ho"h7X Y&oxh%V>t|(A ] w,bin?M)@r+\?>ij4~o1MkNp)Qw 6D 6|ͦhU1&pt~[W*ѕuXF*7 q@t(( g8ק,EDR!:i82IUx/~I&p:fhKs`1Ե YaY)h0SF U ZsG? ѡE|BHC(2,,P@By&%"s*BO:zo109e1HQ. !r1-I{2m@)bVگ*'c>Rv<!nɾԲbJ4*h>I &൵ZgP%]Y4n^[ = v&phx5yrWBඒ֠`hIP.Di10deMpeD˜GJB`ZFvt(hN:&M/hߐ3] /$7pkn#ޮOٲz}6L]*s;{1 1m^ʙ$'nw%ɸgJo4UŶνY&˥dGa8"?Wt9UN5 0)Q \2FQhV%CUxͥ5nEcΉQlʆQbnbiތQVbzEAU:=]dL{=)wZ#5MGP]^6sl1jmeUn݈BkqDY2aw_߭hcVv/.dOܢxDy;vڲ֌+vhuߠH'B&>YW:IS)mNJ'ҡ=W w#PcPnk:΄ּVh J;2EO>(ȒMdմlSEdg3*D-rT1w=ַ} W92wReFp9l9Zh9JF"*"!\ v9KpDZhY6 dd 8eY5*'F~5p]piԋvBׂgݔmoal;x^bqd3&I အF;wA_gm)5-B`Jal)$޳8-1a2mr&ƒs74)Z z(W)|YoN_{qR- WǪjyh7z7d\H珅-}7 Kz>~Ø 3_X@{~ꗳ^8>Q˒~(n0Q*ux͕_޵ْwkv }_/oLgyw6`Ȟ#c3KIfkR#LQJ"11C[R 5'"lF"{kʜiR^,f$5̧,\{Yµ%\{Yµfx$ֹa֦l-CATD%eQx$.J3)daJܓD\5nE}Llf' K묎 @&|/׸2HO[4?o[|y.z@;ΪnĿlsFXy!C/UnhS;٠ b/ݒhjl?BǨ>zr8 y|PxOR;wX~zz,>=[hW\^Ǻi{_6y|f*gMFvvd Z5Eki8թ6{stzXk:t}ƞU: :=5k֒kWA<&%W' RNB 2iRf~D{}`#_Za5{w ٱm忘/wnE hPK[ߥ7EZaA M:DbȀYN &=7B4-r!y"72;94. 2 6&,I!WxEcު WcJid9xʛMfw\sy9K%dVHv-*%SDG6ʹt4dZdksQ3+d+,X{D9K6(by%T1 ZfX>LĔ%R⤲ہJ<SAXLNI&&ĕi-K'3XC8$Fd`jې[*n e%i#ǔX4'͠}9I*c&I*'d $١lk/%8+3m{"Ym_C.ab΀s蝗9ei}\YO!fuktovb黏o/Ksm@?hp1 >Y'=[RQo D -q&jbQ5 .!IS Z!l'T Q#iDT3FЮK@)5Im$`hQẠ⢵=[RtTU8i$Xʦ^۟\LBekbE$ jHlgՊuھF WFk;kfFk1D$b[.dcC0!V[Uja @^Um튾lS!'.5mٟ* #Ya]ݻ]!3/K$׽K8Q)sSTϝas>߬j>(aм<] tnS)wiT3*ͮ.&7n2?0TTz#ŷU1« L<Q-ÚOX][dp$)[, !0\pa}ۻTYz,DЍ iF~z{H' D~OOc4k4>!mTLMU^v"W η ǰ&U$DW|"sm: :ް'/nz?_2|U3-2Ӡч֒X ުU]f@揄QXjņzzėKM17qP;:Ȉmgz)SY 8~NH:'SU-Z_}C^i4+H~!mAos[_poV!`˰Qov7{s'jd"\_u 'B9`$_W_-:\VY[Rگܹ_;\RggD_sA9* ϓZY1ٷОF9J5S(݈q|S4a*:Ie(cm\G.l4'I$ugF2Cnafys7Qk'9D@='2rm)J%yr`Nany(6%r)T ΋P*!/k^Ghݸg2 n܇G4\-nj?;…DW-|87odEݾ9? 
r?,fkUr-bcNKP`/n=J桚aeU"՘)e1yɩCJa1%ʼnbmmWЭ}/q<4aK8t sYQMhU'P"@FT gFh؆i!yP-uB:5&Um%޺a6$O@ړ@ow\wGiADyB|[qȔUZ۽4{1YZʫvFkFlm٘X1J1節DLmdG;5Wv 洧yȧsVc27bkC'#ki;@ԉnmdn A \f;PΈnR7ӌy1ks I^.?oSAjk3W͸r*3;^v?'Ь WnUI>Ŋ&YXG_u7RZw/Z+r;3 tHIjRb>xHEtċ}x!(jf|IkFJK:#0s3y&r#;9:x辊L>(!^oK/_6ZI_" T怹do!%6MM'd(9A2"W[*%o:i%, uI6`HGP2]qA,).y%oR/fZÃlE>N Ga Hivd1I10ڃ{fd%e+59 mL+Ckϼ[ )\7x93LR%B1F Ƿ>٘Uѹq+*f禸7mQ 7q̐ A^K #Gx.ݎ6",B9)ղKk;Zk}wO귤xs7];WTIm? w>l,*oƣO&t41p}q5yz?oU'p +3JZ]̈́Jniy(,Z|wU~KvLv(mB ̿JqfuGS=8BMC~JmC_ԉۀ<|Vѹg0A*@w74X=fmҨNvg2\ǧg|3ϵϮ?^:4gVȑmX Dfں<סB_hHhpi \yaT.)!.T""wO4 A9*e$w^o\]fIM ӊ󼔖 ͸{Y8I(zYB(bm.+d",5m<6x@1s#FrB=PB~I&.-|?np-wo&E(ޥ&]tu(f@3aN_H!JJ:9т&i"t=-a٭ܣ[{ONe87|v`7|v7pBmN6r0!$4N¤>mMVKH j;Rp&˿d'D>J2o qyf3˹gI3Lcc] v*1锧*^Gqa"!s# d)փ2> O ֝H|z;Jҗ`JEP!*8Crd^Qd_TQ\1cP\R2:+(3t&Cˠj"X+"성C٧ɗ %~z-U ?3: nGn>ZBm."ٛ#У/FRQw"re|dbi0 Swg#.mãBMIdZ)XԔ8-`hIï{9c Ź9yC&̧W!7W|(r=@e$[ ߀Y:?};y'4Q(f e?yݏ8oKmdK.=NsCidJmZb1ow/t[ pvpFjV9Yy\w+X}lpnyOnV#* δ؇`#*뚸ع7kx2RUp-d$GҬ:;[kT 40b gJf47oFؿygZ('RpV)U,RRPR*+pm;A~A߳tBKo ʫuCQXxR&g^0^d)'NhqAy-Z5P۳qgoϞn onM \>jd#eVABII ca!1ІIR$E[IQ7!Q"H7#9B/yU]d+AM JRERk{ԙ9d+gA{IiEon5eV^|Rr~e0 ٕ b]LHADQl,$ƒА) V4pG5jY[/ p*e[vwk׋wM Owח˦[ W)ȜvvMvqmO?|9Վonʭ)s~ ]O=-/޼ډ6{czk°nu`Q9_]߄u?S~ uHн:>񳚅{^txz#zKQB҅($փ|ˈ1d"lPJy$z-;W D3kA,D<"jN\6Ɍ^leX4$-R.9stم`M"^L:cS"4*Ğ2Gc8yP){ )X^٤TǢ-G2%Q\Z&-wB,xg% a{a!j_1I>A$i7RjMz.:aȚ5@b@5Fh( ̋DgT5!P)){)@@Up(}&l>GH&I7΃d03A`0Hq1bVW²icPz![kgTJ  )j@?_iCESqRi|=5w4-@dGJ[0kQСɆX'J"`dI`]FIccˢ*RF#G^aWg2=y>|N"kp: bC)Ȭa%VB̡܇I`/cI>WւRY²ztB0cxIV$+cxEzƺ1b\{X3@z l*QY+&2IhK^1X?A'^>ZYmI&Vg$%/FGQxi+X{$jH?Y`A 2l7f}Z)?Q lwVMNxlFD`.5ozFODNUfQjѺi ު-j4n`tnCi{?5vG h)^vGw8{ChHzSfJ|Nιr8)MODt:ُspP6sc8ǁ4U_}9Og&/m/qݝ-%^|}7͇myeIbx?xm4$r޲jrqA6F(|OOzP;w ]]r>TמW^?1gV=TrfN"6u)!-l߾K[:d*w0 Q>u37)&FIY.V3ί>ңmEU;n(Vv۪xbxE B3X^3{ COLAL&%])ռEWI/&/hY 82F^z{\.;p[)ҶRl3wj׊gpEf?j8aWw¬ KhB?ϾIzzqtM.s^o_)SUg1a㑋O>tZaS7q1^C˒ا_נho:ć9˪o У43`򪝋=S穦57烤q3_}SrRmہԯÈ4\;Yŋq4QLUTUݩ2} 0$Zϕm銊Nfޕ;9|_1ca|HM!VtXUh>FD0*ZoS<$/$TU">/.3 /ؑعH)GѡrRBBTNr%~ibȈ-݌7d$bXxy)x͍K3Z,u\KX _>GbEU;n yF;@jHȭVvm! N U"4Q{1ƽT ^< Eg'-Лu$|$ gCUǡmF/NӘ*EUpvW)?g(i2y2h_JG(&#aIDn d QHr5P&sc vAi&JJZ)<', 3EFBi{9P2'A̎)R*JHhZ %TRz(n囡llfwnuրeWeI,Z-]zLVД:;,P֎:^=Du3s~=u)>[J==HH |Ƌ{Ep=$P!,:.ыiL:AQmfp.y EԫڠkN(dhJH`I;٣K ;qȻ67OSyM֪7<y𱐷Dɞ|Dz #F8QW6zޮB 9*I(D@;3H6vNv݂/@ooIX5ID|Fl( $>#$Y!d7#$Pc)%HdD7m+Ʀc 68DT)Y()KS [s[^ݨš<+>@.'nWI6?Ocy? k)BQ'l0ޗ=i<ErQ4,/򪩭"e !Ҳv뮉&ZY,4DԎRO%l / )GjeNA}Y^]}zoI8BhRU=I1çu APNQHo%cțc@xYLZo d^Bh ,>P Լ LW7jqrbPQPʨ[s,QA4,H"59 :2;-`"%})!EySƮnTK.u)4ګrpt$ UHSo3ipTU}7 GY͓Ji+GDAq$m&+621bțND*V@q$n3W7fqlkz6D1+5+63)dƢ=i- JG"gB HR`C `]k0h |"I);tdfN87_ܮYؓp,÷茙-ec{wy/\YzRVK1A.N8L҇RUs;rpἧ`/_:[=dh GLgW+Ԣbɇ1~IW<%~HӜHu6*#NzĺTa umRc괋C-}9/;Gnṷ#jp#tR̟CNݍXjw~Z+Zd@g9f dPd/6+8hڜh|(e,^16:k3zc4rX1{pø;'fӒM Kt`SM;IMrr~9F^{.N=sX|uHxei#k^N$A}8 ͦ1f6zpǪF"9A1>vIȬXFῘv`:wO\%Wo,XC#:G4XBu&:JS>x#ȓQ8Ej2 "cId~?#zꬥv rDPE*~3ȝu[Ne(id,B̘*N grf+^VC8eN~58=裑$;L5_/c~󊵵5ZWoNBIJm$e{?kIR$Th9*#9#6GZB?Ϟ'//>nxx~Ocͫ_7TR ^o=|XH+uqwܶxF|= B9fUhw4;@mޙʕiP\3xX=HB[`Suی^1׭Fϣ;<7|Qm|& b+m$GE`w* Y@=4keyن<쁯>0A.TJWaKV*E0 ơ*9˜L/s_[.&I1# ߆Ԏ)*V0|'4[s_0[s^:@Cy=nnqߒe gLYe-x{2Y`1EPA3& +)eېf7Ehq@x[UvPw8k)F72%?.? 
!*̖g _ި^AA8e6ކ@1BI31U2yzzt.6'CFi6VٖX1p|Vl3|ĉx7{">\=<'TWpJ񈴟~~Z0SS/vqn ,T=MVʑǓmuO/: qHH3F=Чbp87(&YLmP0AƨGSC#C%) /fq{WY]uLT_:̀CiCφwu]~ڌ8 uyS~݂%ӪJ?V$Ջ˛ϛլ\./?ߤ''EVɢj 5YsǟSб˜zX89) ,1E-[0 5j0)h`NDW MY1:V#+ +tl7.ZK^"kWSE (Nš`x+;U34NCJj'0=egǨ.+o.Օ]'%]RVױQjY'ڲ8lWF]v!ʮ%V x4.p*GdNecפ `c)?J6plð55.6amR[LD=k@TM96_3Q{j Y"I-&eAmzix0*5196-U;z~*#C~ <:0|M];92h~}ɗ4)?Z^n](6#>{y]RKv "˛,76rftg:ڪ%+?Pg9<  ȌjM0r)DwQB!KV)+)\p\$Z":O=EW?`sl䡪8+i!ѝH"Ev_VP* .@Z# # gIE5KbX/?ex@lmǢ+@}#gp iSL,76Mבv}3Nv"4篌~Ƿg$q3>w36$sͷg `i'l`lvU.W{;5}EmOȹV)g6gGh{%Fqnu?X4![g ƛ"˻itKśQqѰ\ gS: C9De|o=/CUX+^;F@o%ٲk8I>=LӯgXU%˫K-iXV?)4'iӼ-{ nÎumk-Lٱ[0 2J^hm-\X.1g"h˾DѧFzT>r+: XJ ؟gA%n }wM2 qneP[HC\(B: EܵM\9ht;#Ėů|?z[<13 1f)FZi3oG:f7O{&1ot|0|.ixnc9M&M]T̚c=+MIm#ZaR8ZSF<e d A>^+ޣk݊7/+ZmBXK 'գ3ylQ`x-F]X4i N-N My0͝1RR( A*%уG#+'c$i NOj5I]x+a;V[ ߙ.I|CjJ"wTXKPLJRp>S)xUhp4Dc|'|OHU*l ㇫Պێkɛc-%)$jZ*pXK-XKi圛IZ(R 27$*YXKb@c|S9ךԙbBxnjH ld"J3m="'PACڐ.r@`7ic2L^)nH ,V"GW$ 1#%o-rO6ݔ>і֡mS"Gt!dO<ۦN|'PX[0y[Z1-܊):ɺQA_bZD1@ۙu<5\}x%P%lvLPUؐB` trD_*Vi4/x[BZs`d%,Y?;Z d]Q_.LY9B3Z2$K. " (F 4DӖTIUAeQ% e;R1bI:iM<9K 7}}\i73E owoFd;=LAޏ f;St9*ojs?H }] Z&&g/3cfoH f@ G)7}ѳ Z֓;ԂGtW~Yť/$dG dIaBa"VLΧ&;mpJ *!MIf0eL cJv0<3J`f4<-jN_E)<尝v^aWfqG+U@:^GNYS.@08lvTEÏ_X6si1:DuGzSa70JqT0[dZz"4Ȋ,(i7kA j^ Ku TʦZxL|}*$u.Kg薎Wैh!/}d!Z,'̭S{ܸ?G-';Zjo٥TqGm| Id.ZqyOJWBx #,Jaeҷժ4ŃI̾0W E 1FT4^m718ƥ~zI+m6hMkGSL92Co. (_o_ӧP1~;ANцiӔ6I c džB^rפ3J顦e6To*_JR1%_XMg '*U(M,[詚s葭[v]aV h3~L 6X((˵cd^Y(.@ZЛ NrX-apQ"%?f]YWDkd=eOj}vdԴzTMҊSOH6LTMcZH5v> 2^4BQ=cjf%tY5Z8ԓkݥ2%XU*D ͯ :VC_{Z5qBMf X!WbÏ=xM,lѶQfhh9u썆za+~LY99o>-z*]~NX+V#`WӬ;YPrsrmV{|1QUa3UdkHc.d$LGЫ9-$rR"7riyCLUr<'26$:^,bijvCsjyWnEBwʧ_ةzwhߢ6,Tx-4[}v ti Y4^%^6s\ 5-8=`j"lX ď!n^E*zW+*{q3)RiHN:0S[u@a6&xTWA:}= yȳNۈ %>R_W;j\ *ar]jT}YC*3 8(T,)XYwpMMВy[[p2LX޺n3,F A>$ ;mc!Yyb-J-/e~.wѦuBi~sl&i2jYӝz 8<[??Ytrh8^6>y4ʞo9kI]t"ՙ_zntz&>2Vh2߻;{Pψ Pj(ٟ]r Sۗ./`g<#Ͱs DNoA&+6=~<ο{fK؉~ tLϻ#.3P?RV>߇ gsgsVk795|0x>|c`&߿'A m%*cF}U:{˭Ȅ'"X|a<(Kc:eIǕ $9l[:Po}ͫGUNs^>@>FopDђa{ʭ"zv9贕 ;NrR$p&~}8^SYW$o\#Ȍ yB$JCh E>|88Br |HN!(VXFDEXDBK.VWy}Wb!II(nlI!%V+Jw&I))$WoBx(d1EG2Er4ātb''zGPNpO8WȀBOC0300LA"Sb{Sܢ@VBUJBIaLS͙"DY`LWcBZc"'0a t$4, tҁ&',6$xJ[Y_󞽻 fdG6#?8!nq;r{z:yޥC.$/N#=x sI]pR+%j0M Bg=4hTe`|p l\#йejF b'Eו'!rɤl`rRʄieUm^T咰PDjTsDil>~Н?|)+ v+Fn s׻Ň[oNeźON[︟F֢FQ.( t\b~g5hvdĀ!OZ0XXa$ɚW:w/HɛNlޟܭ|"ȦID`,Ja^2t2o mD9IN8y>>&2MytŎJ-w.[aE+v /gnzugvC r"6riIGlƷCE~%xA4 x!I 4uhC &iP6?nm04r;jFF[b>%TSLIJ׀,Е@cS0sVˆ8KB8H"a8'$vk $vp#8\ ais\v^p\8]˩o䯟'%?dl^]Y0_W_FjBē -B-ZM$^vAe*9b+|%p"yҊ}(j30`QUb)c"2Hp_t}=s쓁-nV IK1&~Q)>ԗ{bL;"&  Z1Ufr[ &\S0݆Ѡ邶nT.*xV>iZ  սO$TߏGT&^ =1R*X3I/uV)ͥC԰km awIio*O$S;ڀ"X#wnTy?۵WNYZs KfX)K(, VDo(UbVR1UT[#LbYP&k їfI(JwhZש<[9o^yR=+{Rj3TQtѯ6%غ&EE[ѫNJJn3SMdSc WHx##WCPJ66OI.hxУI(t xAhB}OQ 4z`iSMZ sLpesjPDG\#ZI!#4H&`#eʔ e*WSWe:d+4Ba7v-YZ)͛mV\FD0Ncn8DrΚ؆ZPtg.:nJV1ܨgT6Lc !S+dDHia;e>%4o:PK\pIZjzo3aʞ\u)C:Hqb"v'A,V, H QD{r FWJ 'b6˵p֑O+pk-avۺswaI5sʬ2MV XͅElt.Qw8WpJ')%R"M4gАvٻFndW,=V 0OIv9H>,zR8fS0m*(TS "qȦ= fFtǛ+}ȟ+YLCoxgE ʡ)GXh(p }"SwKJ?_P(K *y_-pPyA*jևɃ8/J8 ]lD_S9_\bz&)%DY>8 {cB:ǒrI ܴ\w5eP+qq#ڈr FߣkũFMKn(4c}Z!䥞}4Ɔy;>|}bwZ9&9v=F_ȴ-hz^E( gQaqd4Qj(|ɔREDR9RפRnU@wqbLb;$3u?_[*ȚP }YpY$O ~$IZBzHF^B}eĨ;ZW'J YQ {fe_J Q\y 5K3Lw/N~u_Ũq8`>ke,b #9j̧;5EV4KɾL~+޿NR7dfJ^שUPV͟LۢTTtRQY6Mw<(f]$0\JO+ ڂDž*bVO绵ސ|XDVvfʮA/{a׽aszktz? ŝwhT[*,*#W+ʜdN\{VVSlT<-HAUXl4U+?] 1b'"(rX)CDa$6X00N\ dN- K'j@d\N4>TŴf*ϭL!|<ʑLkj* K"QVV䒼u*wsYVb"TUWB@Ѯo,FX@C@$0ɥ q. 
iؕ 0jY6۬YʊYʊճbFA(ba X(>n8S4T8_2рΣH[F sMiGEܦa(S۬YZYZճA2G/<*AjEOЭ&BqlIõ:Gi͛AыeP`&M- _Y_Y=F(ɍ5 $@hPyNd[O@E*@P*piGE \ywrv }S홉Y:31Kg&fĬ~fb WT:) A&(iTFHsQ`* vTf ;JKNc6SȐCDsҧGh:)CC=,{(; hңk\T M|B`JFRGs:p:2+l+켳 `ޞk(D2'Pҽ0`D0ƊT)"-혊#CK90i($p#*t$!#w1P#^@hdTeQFDnJ6[|\mVUEWUtV8 J3.PX:$NdAkCFGˀSl+p%ItT]sΫf݈kdG$cKLT:R"QD,Hż\X/q0:E0[wQ˩1ss\6MpBpfra@z…a~ \_b!jD Ҩm j u ܅2BuD;4zJ #H(NFzM"HC FW$HˠdT(KCEP"=c7Dxun,yjDa54Kjak舦|֓pla57oeiDL!_~~Uk\?L{ &N뙨9K&+ՃE~J !o?7=ۧZR|?kfdʻjD&0{(r|&*`Z8 , 쉃 3EE + &'\LI(U KE3xđT"m$8Rh**i: ǥ B>`2i r1 yn_9ަ{ T \UܧOPٴҔqH8Eb'p _-'W@_5J"PH8Ҝj27I7Q"ffItSgQY:. Wh 8O# Gu'$ \(@43+G@$-yDq3fzUB2WP> u֮V#%W轂NiӾrqЬ@{RmV>4L(էHӦ^Ip gZ}0{"?@}P9 ~= &6JJmz(P{ r [+#*:)Mʯ"c؜þ_" GN\}z$o´>=B##%z_ҤF8@X36x=jt󽋘HW(_qܣGh4:d%)xN CnlҤmףIRO H$5=Tu$tHMzZ넩k$hj€^H%DAy:%=jR]й&Z@ ٠M{|@#1|N:Fƣ hB$e{k#q=»BJ k!T7>ˏvS=LPF5іd%b:/)cBtkOͥPゑiҭt2 0lk?bQ@=gTxBE3uT9_If hǧnڼVMiݭQ6|}˰,?{Ww$ǹ ̡sh HJG^kv!֝=Ԉ7,CL_j-j ow亮34ocEά/f_zV_?Rbזo&U{.o*C-̈́5jfE3H$Yy c 3%QAOJJS@R3X!\eƙ茰xj!zQ(?\-8O^ ~ƕqtf>?-~AxPG}PW/@С>_q 9PF~aRaF{H!ׇ m/jݸp7BU/ eāޯ~hxތFD7-\  _+3p`1_,W1 '?t jm>7~=zdEzFWo* WJM7/B|K `s)͠բ"hNV ]).ԏHby՛ޫeMSя:57G9C׻60Yn*@95v5nZ?dgRAmWqz;5JezbJ 8k/F,}]> BB%ju w.=?ߏ m$xBGE$'K)ۮ"EXU>jxZ{61Y;Y5Y%vp]?Z%g[nWz/g‘3ak k K4>'4t[e%6UJUDP6(OJWڡ=!Vơ2`U+Q_ԮgAv0P C'2tj&y yt UIzsx`% =)4z^Ke9#+ާŝ_\Mֿ^=JW_AWrq*7j3r.nn;򇔐qqk Bx fWKO{ {RKA(j7|: 3iQ޴?M仟9d|a3jyͿA~0͗ӥ_|Z-soz @k 'tNs^:y Щ> 'thLGZѬw>Yj)6r<,m;9Ƥ zg;_⫻S]oWqOodZ=ߠ_C3ɷn曏`A زD֎.N"j|"pI|2 E:#U|vBH49"$̐un=&zV㓞 ->F9o=?$JU',4[֣S8DV#J+-M aA@Imsu rv"hZ!_&3m84s|p0 9Xa0$G 0 QCAgԍ&cT0?=;}cn>=yj*Np%'͖yK)" ޙ|vC' Q*!95mc y3j5/Q`'hNn6'zHi's:R#g5>H)P$숎a%v0,rNVV!~Ƨ՜ԓ-v_Fc֣ s0Ə-٧\ݹhD7G_tc|Jai9J^`i9_%, ']眮蜯׀k^ ϻ6wrʩLx<uȳB~8p~o㔵 :^ ׹o2 }'eS^gNczÝ#Dy vG  3jw8p@1҉|pԲpEp=b*l'j=0IPr#s?}q hқd$U &JUE8s4Yv[{BBp;m"<)_ ˧J߮I"@5g7)B64΍nq8$giƌ&SXc7);u9aaR*kg7#eݴ1$J6er.VFʈdV EZm k6HG*ou RTq3(*~I֥VzMr[5S KB1ys8RD޵$E[P#@>M0̗$v$+e߯Z-JĦ/jL&]j%J5~iDUh8s:ɜ.'wk9[r'Zd cƊa3C2F3&lIDB5#6S'H""0*S/&H'pԅ' ˦O~-aB#WjuWtEQ ݳB?<[3̈Fr+jMrSft:Zc1”r}{4{OcE%(ej_7)VcTXÓͿF@WoFյ7Χľ55ApsT1坁$b2@~fn.[Di f)d&eL*:!dy~fM@ioތݼ$1Ngv6_$0m&MPS/ `'Sg4Jut$UXIƠ݈Z&5xЄ)-(ub5&`u'= & d^שO[Vye:p?GviNfw% ;?ۥ-Q]-~PĤRoИn/'|-{7uP x?} TWa\~Pv_톼0'RJ *T^l".]@T~D=RjW. R$T!D~:[ ե^ z i֍n` BitKA Q*IءPC+A%pF!\:Jkh~7_B&"J >B 7mً7ǂi4B`sڠ f %v۟-i^iVS5c>D!9_u@3GOO:uxӇw5;UV%ff &/Sxrr4dmϿk*#-S :#8<3 yǸX/*#YHA7[h*Ӂo?Pw)OwQ7H=477tOI͍z`Hk.7g$uߦɺ?{Ŭk2V|lN}Y:j[pW%_c|BT EC%D.QaU [P~2WqP0/>i suUsY"zp5eopZ9 %b@R9u6F↔߭y]p|`:,PԽc_nłͰ"zFwo7^N8[+`\ކqNrT.K7D cB!8!1}MbYM9'`L]qx?dWGwhif10;{JteLDn2M+)Ar,P5sko=)$Ηu`cЄXrSeʜ0AqbX3h"eI X;8)8135d5U kLj*zؔMU8{uYZlMH[.d:e>zg$|, x(\tia7-kSN>ۗCѧ!pm $S/^7lTh}wG~:f89rj怗890D =Nyd.Dq!K0gY KY ;1 yf!Y qj6"?,7lL$Vm?.ngQ:Y8Dzw4lH4=@V[|E4Y,o|`ߢUfbDri3"Ad1dU3c"ja3CqZAz 2pH$٥6 %Y 0i 旂i `~:,셋jH&B3$x"< PHaJC  bj{̩65 WJAA~CnPDV] ɨٵDձwcvNy0CLhgUsV'P'퐰M)wzwӝ/h}Dw.IQv\շBta&NOtV.l`zWA f ̨nS6o\tA~{R\ѷm`H:ws0Byt&s᛾ncq}E;>!{{ݟA3&"C: DHKvNTizVrW levqeAo:!=|9 z>inhzT)9~T>ܽZ:p%E%0CUhDchk@we |1oWeq qUƐptqUpt|ׯUv0QXc"C>N7O7>N7dDmU;n^oF/ ϛfOU͛.ǣ?&[8*SўS*|5|Emmu{a+B an;O-x5tpu53!]ҳq+=Lj(IS{tWLx,OP$Ml8IJ) ¸.C\2_dAKCb*HHcRu}[ tB )vN]9lI8_mw88ܕ_&)&32rfĦZE9 |\ @yUg?tSsߟ嗓 @NϿ6Mʐq %,9} ݙ:9*KOfF]%v=5u[p6/~jŸO,\9$Y\pBMnfgȉ>Z0Vy>z?%z<V(f_9 nas0BHM DQ]̻@_/8@aPM .{7Rڧ/ߡ}jZLET(g%lAY=j3! 
: 7J<~`iP[.8$kps6&C2zs]p~~U3ET֩iEQGrڷ-\#C[Xc*g;Cp=2/k=>Wy~?{vg@;N)MYsK_r ae7ej؛6|9W@$Kzp)(Iki>'6ŏwX/܎|ܱj95'%zR`*gn|;"=Sk@hGώ̓#W߲f oz:7v^׫7"^#0~ILj!xJ1gU6JcUOA H[b=Q[y L@ٺ^]bjSv %/Ѳ0S %4vf!f8ɘT:y{u zUs5gj/HK8(qͿĮNL`Qٛ\ 0zsZJFv5} r9n"F5&=طUMcuTr]*޴ZgB=j=:5A=Ws{X/XqWpE᧪ǠEisGˎ~ovꛮYjQJ"w_gDf*PM S!-VXe(q"1M,ɰ6)*36atj~_u RjٰDNBh4_vY0Bk'ЈB3Ƒ҅]aSw=O=Hg wH7U/0],nf=v E*qdl%#đˡ#Rb5byȪbXaɥQ TK,΄*ь3 P ,O*0p9wFB@bƃ@OL:4ɐ~}} k.Η\Mԯ.m6{]xuA^YmMV4 g$^R6eҝG['r XG".bºERY3{ i@@i~@6s]`jPQ`(cRb1f+.HfjY!-91뷉(Lp.QHD }r(|(n7*A0^UW&}4~MfKX<_<$횼ۥt(JVbpN=0e F0`@1ppe0ԭ(i2)rsHJzNiyz{ľh7,ߦd  J0# Y,(IZ0 YH^(MԔM&&(5MU3[؛/=˿˹f?7@/鑹w\@K8>NR*:G,>_&֍zy}zbeQ1/"~N'#S-8Y~g:~f^M9"RP^Y35T,17cjxnIVQ|G$Z\G6OtiXCBxy2"(ZwAyReOuD-]<[Ȅ<x߉R;&M~URضOݤ(['Gb!Nr%эcXr?dҞtc-z\GwiѦEkA0^W;^ -2z },66n e"C-ql2]B8]Aub8$@|VË-}Y^ qnLOӋo3 Af b=[٥\E]"ĮmE.+ ̌T-j"3")}ØT${p%$'%Z%"@޵B%\BdUSf$ag2{i)=6Y &baclŝZ$h! <=aoIZ];.۔2t*]AaTtcauEHg򠯧*vE<)KkRdOd"$rKNr ZzjX)d}CA`]k Ov)X뢉Vs=qNɻ&gX[l;:ug/˯?\eΰ(#بL0i3'0j`?L xFL@Qs%C{f:ҋsP:C; "fHgCxWb2@R5ۅf•_I$#?h$XL_($ȅĵpD,>?FDb{)3(4<  ڡocۺ78۹@N$Zd6QEloRBFHJlK8Aw#{lrH _.MD&ʈTzi#ͨCIxOm vdg h"0bߧdS6ͤ ~0R- .t&uIĩPQm`Joа=D3vWn(@ojҷ\(݆BebVL3 [ܠV f` 'M.MÉkj+2gQ<ӑ9yS# =gϷ>1zOkw\;ۚH *vs@DdҞ,ށ=u6! ΍C@9}F$u!vaa_;^]:Hh'nZsyM>5\y/J/!z̘FƗLO>԰%߅EKa rurq4l^'8)3jgp/U0*7ރ;WG՛b{} R .);( @6zټj%EXΧjBL~qte#!1dlvNEb7^RK?$QBtynPGW6>27 &@@ܳZ& fnT k2 *LTҔo(]QӭmUP!MaHu8Ř}UӨ|x埂ZSlJ`HpB'B7,M +> Z4 vp rA@~'wk.g2G;5[& {N;ޓSaj 39%]V˒A;eD"ϳ9>hkVU d$ ^yN8(]QvFܤzk)pؑVPOvqෟ] /ढ़=d /9}m30Df+Ia09/^q[L>Kcߵ7Μt;Y81?fAp@qyJT>\^ʍ}Ն~w(~OMYgJ .d m& zr/L%2џ Z;U#{kg_g_^0Zq}Bb:ꟙ'5FŒd,r|a#U&J ,xд`Tx.sK67Lj9qg8>nGNٗ-@'*o#DZ۶7n]嘣Vi\Xg/Nd~8>N?InGϿ?ӏ{k'Iu4*G ~F?O}F/!^=]UUp`sƥfspA}ޜOfdeB@NurA&Z)B+˩ !D9n^i5]3sʙCdik2D$,HY^Hc\<uoȵ@d$'6#8%̸XHyќAKv5S7W}J%>C~w|BNJBH|;A8Anw 5>Cr{NH r(!vFZ[VzCDۚ!5"GʵO9?XxYɌfC2YZ`0jQ9uC% gt"B+xZJ(LMýfv>;0@1@Fobv]pGyԥ^d uo0 B ֔ڜA"5adln YX-iP5-!"^;5= v",^|Tԙ í@x"\Ӽ"%x%8Btb.'v2ϪLU/(|k㆙\3 *(ټBLr㐁e} ǃGO#{vOD+?s::A9s"-@bF!Y F0`%`\Y. Śq̝wyj4@ Td).dz|wPCAlEy?=L|ɔ̗L|ɔZ2OGL(~2o25-2rp IΕqkj>vPrJ!sXDP܎&r>|xWGG'u>zmɚ_Om|1Z4|%cL#R-Xf,C2׭ז)bYJ8!Dji$J! :^nd7˸yZVh2]˫WxM < MցثݱWcU[}yLªU:p-\v4\)WO f]x{0 x؎;_ D<)l.JGE.}) QC (bMI)x=uե[+  thElJj4P2+++|dQBJ1!p V<:Qiֵ#AY ВI?X*.}2ITBKa4VM ^8f ls(DmXшbG#/\v"$`=7b-PLF;=U(qт(D@0¥-{ "CE4*t&zҐ $Y짽@!%:Pk J%^Q~`mf{a)LБ"">aL;"٘@wh$oV{pAႁ ,,%4Գ&ZIt`!1Gco|@Q"LcWH+$$G+w.3ISWq(Ⱦ g}q:*H4'?w>P6c62i6i]{A1U̬`6_Ig\x ;kuaAp`4canubP>µ]N%8r<6!7dZf!Ź<WVqbB.5<,/<ҼF.g ςu4k iR<1҅-zoyKXֶo6z&s+MQbit7 ՠ Of4. ,W8T"^fo}˜&w]?tԠjsoafv 2_C0nܰrJݛ>ufԺx/*;Lf7ۋe{MUt\%񆑖 fs>8v |lyf *gTwn2Wpe 9t8,gЈ<ۛͭ/Y2|UǪap`}A^N^~8?ZK^/aQV]̂bEeng.D*lteT}}*oYI>fv\).u*6㏟bbT97׵"c_oɑ(O1z6XU. [,&܈+ڤ_@i 4Jc:UpDMTԽ>:bKVLXe*02l*?ԫ^Vћ[6h@GtDg{ٞٞDԣ79YDXŸE*-R1nqz1ny[3OGln0휬ۮ|n-;?WWE,pf0W A,,!1Txc{5~|; v} " ^1J]e?lb|7w|e'O.!O2280TF9~>g ɰgU7}DqV%'[v YZ3 0YIU6›MQsz YB +:5) l2S &0v FA5ݞ.7oohh㖛2C`3uګaͰ(][vQyK~"H,E!e41L'戔ט_sZ~Ry >x\u"D- #ZeX N%Yn2v,7r͒7<@V۾cUK4Iǵ|5zD@! #6N>%d|Tq/yIa%܃ ԋW5Q _,LȱrjJ _ 6p;lۨäZ"ܤ+߿뇟5L.=CK?")ӻ`wݗɪ-/Ԗj0No϶;KM,JRhUGej>XͅTrlmƯ=0к0[L4*~f 7k~e\gܪg3J4ccTbD^Y !H$)̌<"Xo+dTz8\4vx9B z݆A<1ÎCK'c"f Î O&m tV4\݉EQBJL#Ű)IfKw"X8H@ .= LT.hhꄮؤ[ tVx$UQK20#.5zpP>S'| a iW[0Aaz<: B9Ah&zˁO E?[uoU #*('D#+ PIʨݫs[<ƎO+ GMH8[O\,RͳDq,$c!> (^G[ټؤ3<5׻<f!^/vHY,ZκR剙=jO J %%mGAΰBz7d7U)?xy%>Dj&4:1 x>té09T.*3Q|%DoT95Au$yf|8ΰgwSB$ϵN1np(:-5DGwZ13$Im¼۪}lK+b6pcQ \ MLH=3+`2DIJdkMSIɭF Ja I8Q ~'(Pl1ȈcX -7UU'E*dD5#*!iܻŋ\faqdgR< ɷj!.,kPUFE ̆OHՋv+!IS51 /Ǎ\ۇ1 m$Enwhl<𽜨i){87.\[0刡XG-Q|2mfɅ|gדesSR eٵST>EMUoVGhN =̆uSK9uŠ>u;^-ɔ[~ucCօ|*S'mZ3οOc5]^ׄgBNJ>Vh:?mдoHX68$h{Al-xɋC}j,s.j[4 A&?Jk6Sy  V! 
Kp#~Q;q&4XRT(, T0 !q"5C'ё t=XEi`d˯?=roi$t2eੌVe#IYoX0̦OǕ18#Ã=۫'+G X%]-AФkF]i\9 88 x}fȇc&QiKJq~?YHzF??~퐫'֞ r& \q;!.[3%NQA*"nL7zgלw~43uwa{rs]a/7W_|+Q8䒁̫XU:*u3tm̗*%&a,]N~[BY\VlO ]6Y;ǾVjһ~d)xfrۈ;D^}6JX1@z+ik^Y85gQ+܄+'-oEy6d5 :NG۬ƽ#[POڶЈYmE" Z`B+J" &o)Yeհ1(@!~Frͦ@Ko7$o3CV6 FO L(UU\/S}Rے7_э [捍Bֻ!'ȍ{\N εּSGkk9˾qn~V?tF4O[,yE J.ǗzK1VH*)Ȓ 態S@Ү& ˔CrV`9u|F( T7%j۲T#+`6W+3R2YW( @-" ̙tBqV8`Z߁8F3Z:a_|'"Ԟj^THTdHmqLJAƪ>MvK{?E2h^1z-9s9^$;ׄ b^@Ԣegu 3W1k}xIvgHPHhY >1H!j J<9{Ҷ _5<'I;DǏvn ޡ}oGl47sF3,Di&Eu'kl+`+F#Y g;@y#L@]_V9y^;̕B\Axp͜-wTHt`GJF;HPwON~C9rJhCLx1uZLtvuSRn D8ӧ4 j )B8́L˸y]7:^CD᎐\ nY~Bgro~z[u7@0 2|y֐uqo`҂ð]ITZFXQBPY-kw$Eњ%a~a 5NammȩHٓ|qp58dnfQEјNw&^g)|vLѲGK08LQUHxYP2B_eNF W\]n@l^qN5Tw[[B$>强HY޹k[G:(NN &BDZERv٥DN\{{∊U,EFW*bp}D&GȾj`-]cp 449J֋jj hd$@UD(6OWcuT`#wp\wRl~x1i VI1Xg)lą^{y{q;a8B%V Џ 2Jdǚ[*N瞌gvWEe,@BdFes%U[Ja+/V&Wo.QJe+[# [xTo^:=u6'tHnu-X9)(fwLa*>#Jqؒŏ!D ui t12p.Q,-cJ@kpavWO^Q!7B"D},<e3sRGU -!{p2X9Z˛~7_r툄̅z i Ƹ7#Y:/f?-%$$;[`VIKZ/d*&lHT1*)l%0{.ٰEdPb0BVGwM: SԨ3wX*UgW&9\q U> ajCmP@ੀPj? ;ppZa_C s-s B,!aཚu][o[Ir+ָ/՗2OC$A_%Z%l+O7u;(Fj==b됬U]EF1>vn{ĤP}ث'kki8<~fh+_{ߖ̟*g:[~go^ ~Y5JP-Etm_&uFp %̫K[LΐVHjz1ըcފUc"mLJCZ*pEV&́M(b!F^w%OaxCP; Ry++d?6ik[޵E!淋m89v'XW ;LS)u/Dgx%N$YesqZм-j,o jwZ@(SB\EYFlA !ju[*9㓛 諼_ìqǀ'pQ_%)$xgw}0;cf Y /j6@BmPe69z:>8mٗ _l^/R b|/R JT9,S3+eDK9m9-p l k5L(`c R/YB(ݦPaIRAd/@$_D-9g,1a~5UDl;7ܠ.P'*pQ:AGc7"w÷"ertODbdm X_?#Dv8:VS0^HK?&r~] m ڗ873]Ǡ ]Ⱦ<PUxڍH20- 'A`"&xzyMt|RskGup37pf,wGh(4ʴA\Zq~6or"-dE>mM1ë#fP_ȭeŽ4_H,E0*!\.NL`i#!:HPiR:>>0Ҏ0t]з\IT1T F[l~vt\Cӎv$;.OO98#pBXhz9';Jo0J0Ӑf:@%-7 Ix&ZEaCI-]2GB;)ݑ؂ V*/奪> +eӃC!8z> FI[:8Ic!>24))ϔ.O*c ^LU겼ǗFew{ 쮹ftvo`Ya,kك,P&m6)"v _|xfa?=*XpI 'N$^RA15^ӎhvv۰=M4n2k VlDxdGhiXm?K"ג4nIy6DGs.~RsWMQ FQMqM{@\.$?DnT"7Mg, 7&*#RE96򤈺v#G"OΔ5uQv-w}u OT:dBJ?vѨyt+O#EfU9/_cl͋sUM`c3&8$12ʿnawM+>dpRKiiOD48mB|/\>d;}&0'y3) (,,:P8!&gvD|0#E9Wn| c4TWhϛ͘dIz@ʔ"fS644@;B\)$5 B*BCHG˕Ow&@0@QEfHn`!b&$F+X ~)긂 (!DFI{ٷCfBM ab9XL,AWA.uSyk!΃9 L *DHb4-SWl9Ҧ|>xfj)N+*kni"{EWTL(ݸXe֒Puqq_/߿nDy힙? _o7=J\p~.o'=:beӳR#_ lV/e֔chU "H6UlqAۋ:E]*$<17ƞ?bwT4w;hc &6Z.u䝄 |jE1AINNCq?qS YmƁ-g%}"){c %c `dHuȖM;f!Hɑ :Zur"\{e ڂ|!)H P/gtyhY)Y9-:Z&'E](ĤjEy[^*H Ry R.5r*$\ڑ<hZ*cljYދ!67b"y۲lBQͿL[!'ca; `\$:WX;<\ l9qֆi|yudkYgޫmhNQ{—mu+{B3JXcSj!*@Q18;t4*j`jBP1Аc&'b`vD֌B[^l.j^;q8SC΂{ֻ O',HXs(AkUGQHk8! ~ ) ȑ_΋xv“ !h}~RZ mP,H}*o3`AE7Ez&$뎊6h(Gs3Vk&2Й %FT LK4PM2,q@bު#ihdqƷ3Rdx_0?n),oX` kp 0e#%qn'!7CC4]Xx k dJ[VX.Q5@PZ7JV( 8|H@0TgezgH?jQ'"VRVI+;mH-BLQ]U[œS_ d6 qR 6%+NEc† 6U8ty$@:Flz5 XMbuuNf놢 ^_n 1ROzdmD[wUuXt2i]xöz?~_zUwepOd4RaDbVogv}q0PVdVy1}]ގ>4h#ڕsl7w*48/F?Z}v+sW{7DoH=' c. j-!yWF?>s;&Lq!G0KS!cS1L{i@*wYdPb*I ?k4+ Z47c!yFHc]B,)%Z؄O"aK8 b|akuοaַ1L"y|or4XƤeRVn-Ȩ#t,"c/͇wG}/wՄ4+` X[Q)ڍlY\lU#?f WW[XolbT;I[AWՉ]k@pyvT-P1:pN&J!H: y=;"OzwGؖv ts5e\3_5ΓcgfmO;"`-;3_s=e]1CBNz6.*(x 382KH$NYHCk%jW;w˳twC*ӛatCcct4G"= ^mPp'7C,*$%Ioј`2sInk~tѸAH"/% v }qܿMo8GgDGb8/^q-Q`C-+/©[Ě4=9vmINQ @>(7=}VtR؅rt}&/S$A]`GK @6 $H,qA?%4)"=E5KŇ"̌2{tuUWWʸʈ-ht>ݘkl}It4ؔ=V"zoB vq>0e$S_/Uw<̦ٕtAQt, n B b8SnMT"zʓ˫WuU5auNG=d)hݣv ]/#Zg9<I 8(9kS]sw xVIOHoC,lPwګji!D+7-zIg2:@m+*@ޅR-G&\*_#_,TEW_:#E-r*&Έ-DI9F w-F= SoYgV/QZCN柧u~=n \SS]yHMZ"KFT&={'~vphE B@(KXБH.Som8zPR'oGM%S ,Zu f6,|KŸZ 5GhzކRLW޷āKq}.F䔓0{0S_Ҍ 2ݺɬjgTV&O'cl~wF+%JJѠE3b:/nF'9v,Im`V#߷%xUyWX'88>jA4\=;Hc4@+xM*ȼ.@43xa a@r!:[YW6Tͦ ͽ"9Iok^,Rxe mX,hYw8CE"U#ϭNxxdd|16d>NPs7! 㻋ǰR.uc0׉pⷒV-~sHG[kyIwL h6"|ubV )P:G.D|,vMYՅCw#@%T'J&%OvUդ’Z7i$=1 ܋y={5WG;Ǽݥ:4^O~B ѯ~X!|xZ _KF{MDN:- pno=H zcS]$ ܁j|[f6?}sv|xieGg`R J-E/W3O1rI@F(I%N]鴎]w W&v/8Ǖ J J>ᄆu~x&F&-+KҗVH7p._l8tek 8r4~B_귺~cC(iЂjЬzyC~|_xKRDy>-L. 
N-i[p=w`lvpst)>Sx؇#!nY ,&]dť q.XNSVgt4nCq,Зh>/*bN>} {=xAe/}/CI![AѪBqgBf8sh~%->*%U Q:~ $Ea_w 0 /i_h;JU{H@pJS/TH @GH;K8Hg/4VRO?F O.P;%npu~d-H3v3i5r3$;?Typ XM8½+A1y '4a 5oIr%RU]k#%{tа#O#0uH Z8l4TN]s=ư1ڎDrFd+ګjJՐ}~^WɒJ-Tv=%*j't"ٴ%]qUyfv>r]ծ~h^k yzxckioRZSP3Jl-_mT@~AbfY (,pǀ(MZ$8/\+쌜!s˨v U#( Zҡ ιeHJ(I9Ps$zä;g^;H<ZoTERк~Ez~pe@R?ͳ=ڨM1}8c.}D#(mĘmζR-yک–/,#5upPzkdzè3(Zɇ3}ܣi\n2YLV,<\|<a<11 1!D5Ў,duu HgX,m u5 >b9MP6\m}v=;~ XH<7HZIL}cDfX;XyoUfO:INP'ġSM0ϺQ@J\ZʿAy))!D9*2JD&jז^,v,yuA,4q{_lPÕibF97pDm޵[].X'kWMW9m}p&iyU<6-;o`Je)>8Rz߳Y.XyYa~Mf:tZSe|sgZ訒&l\eyƆчS}'ן'3U26sx ?^t_i.\lBo_wi]Vە\^2KViM$~/5ZՄ}#qR\Fѵ5kԽ[v7̪ˣ&&J<ƔM:X]=wG#y8;cfS$K@rKҕ4HrޤGGU'Ǣ|m;x?Gz3Ij"&܋l]u'ȉ("w4Eu4FfΝ-wWXN۲i}1Zg;Si=M{ *O('_8JόYժ?&wN~]K].ms?PV+%RՐQu.8"jTCzJ@N I3@bY2og5mW9zFQfvWq*#;UAx{8%uaTj!+:nLG}qnnM{j%t)HES~ %F[b$讽}^n DŽ6>^]8x mڳv>K["˔'jI{āsDd;=^q:uRvZT' VwA{ۉݾwwt$H[2ZrN" 54_}\icplmUɛ;R pHL@@2F\Fx2Niޤ-W@C 0z(9 t!@;'[E%eACԐ'd踆FJܒqkT53$Zzpę >NT z8CP`P>ԁcR1UWnKeO?v`%[LQFiUB&FΏ˩#SJc|$חӒHdhu 2=(Hq)Ts%tnpLj.G`j.'eOx109E|rc2&-d‚N!ڡNJW-Ւ24Iwf럳W=:KD->ߨZy1jIX ˎX<,'ޱGJ~u嘯_$Nztq&)%F1p;x<\1(ەX~2^PKs+}RRʕ² BS&-KMSunQUI߀‡WFZo YvV^ ڶ473f=лUSV ({d?DTD?l"ܦ$'\ +hMCb[n_xgLzcБwtAvl1rm^Q>y"蚻` ;pAT%z/L{WGΫ2ZHJ @83ݟq]Z]8 G]]GR$EK%"kc3v)[)[pMxmRcj"rjnePF`Yi.NV=5K  AGoGL0:@l$f.T_^ag !xSmJVJ"B"$B;s,z%a\*"ی&hm>_wƼ2jj E gd)sC$7 "_wت#{39Yd(%QD'%䐵?w@75Һ0KAw{>=3Ւ`;GOn8<*F씾]]7a6[a{N+T2Ow@xĽߑHJ7g,_67 &US$duSLΣbդRTs}2y.85ƿ+( ı!2QeRMX {cؒP%d gc{PȾ_!Bb/6bz= \yj_ܿq)P`L^D9VDYD0 k}F&+>` `Fo ̵^#)YZ[=:Lc%@&UwUbRL:Cj\QAʡ Dlc˳U W-60 kfL NfyD- (@e=R_"ô 6H~ C \SnҮ̜8VV ]뱾483Ry tV_ږ!(](ЕtU_=Pg$:腿։cAma>90VӰ.SU{oDEbo:GOs78'yF;wG{9kB!"P5k#9ׁ}3S<*,m[3=QL=x֔EE~k/W ǀjIs*j9[+b'*KEX2d^{ t%צ@ f \W W!Fr ^-*(ײt*)xs,Tn|Wn͑ޗք.|}l )T|RmO-)de=ex6g@w[GhXV"EEvjҺQBZ8w2-7 G\ a!"lC$ӵLVQ,5Gm"Z^{jZRߛF]$`B&}t.ApPT-Hintxt9?{B?%8=A}ו5SMkӹ.rѧ^짮;{˷ݛgnK(u[xO3*f+'+SJV侑_4|.P/؉`,RwĀz&11l|,"@)?J#W/<"+ࣈp"cjfځMZ_;~r(\mKj9U5-_ &W "Q2{/[Ok}hk}?zZ{^fbfT+hnd?bRoiުhs=}=TJ;aTI ߮4b {&zhm-&@T܈xLlH'sr = &ļ/@#A?a <. N76yYhFYatQyn@kI_9~sژZ]w/wmFU$* I?w:SWfFU1"eeTmkYugsT[Tz"ZC鍁k¦h;Yۘrc *k]E^v*܆=5@Ir(X UN'+rNd4ڧz5 pj^;y]N& T5WĮgEZl[ls8D9!N(7v^RJ͜6P7`5|$Gy<>(A_42XP8Ȏ<ӝѻGlIu!:^v`J@!̲kMR?rn+A^y鉘M^Qh3}V9:/W^z!VbNXZ뿬.Eepb !tD'}U`r3VnJVNfgբzVvGXcEJ` xS+U+$atdo~<ȝ*-$/,B2b4(0|H;=0pN P&.U2YNK/rQ Vn HV:c@Ġ@BSa Fjn%U=# T 2&Ĉɴ7;8Vp*ڸ;;&.5 NV7])ŧ PYcHkکͿ5Zi{H}4&YM" K" Se׌8sBjZ{zN<2ϱ';\lJP߻8j NO2%L@6J}8: "8YUU{zOf!ҴN/gggq\#,cL,m~\{T?>kjsj0R;sw"wj:TAcG4 e/7~y70T*Ia=dT}[F4 %fgӊN]UYھ>{I*Β"TeH) *g;>g *Ë/oZmWdL4 pqu3Adz~Yk.pN9 ,z-8g%|,)f i ] B=W߇(U_ ir{ІQNVU:W$rR׳볆\7zWj7Vmuu DZ޲(<:^"֩Fv6vv[t6Լ ~0WY%$uxNJNѳYq@Tp¢n:5MXZLi37`*@@]a5W6ڬ)Tlt׽F71S'Rn/oHsYyNpikKuI$p"[Eܱ$Ĝ^~Q&i[ZQj?ݏNPYH sɗRb@\a-#]Dڤ\!X౰VS`%;,6w-po/C0&c-f|Z82x6[Z&;ѯ`κ'Qg$"\OhqjY&x񺼡Mv꽟?oakRnDiu//k2BJQe VfT*,pƁV2ɔЛ6񢞇BpY4+5 4Mfg&5fnPDZbR9HfOضx\\ɑLALvR%E.%+Tz6IMͼ`.ނ+5bJ Y -SK=8Gi']U )AD4`2Qa-/9u\0:Ykrf_$ ۄAyj5kIjЉP%IEpey\%ak4.pۋM v_N08fhtAm1w$}4]0|ndV]ѭZ @rXHeGsl3;Y \+s/M<ћaFw_mD-2)-BI*տR _]=9ܗ#Joe0 R$'i&%}dS>Փo~O];ؙ WUB/t^Z*Iل@x,9hc 9$ HL#co|Jl|~7:?!GՑ ?Y's'v[σJ \܉bIDnC "|$%HaeC4W\~ҬPG@ y) Ok\潭RfA],hvJ֔QmDյĔs 48.\6 -iJh^:wq~t Jg~Luf=OfˣMZVp휎~zƬLAa4FxR0o#Gv=vDM@JK;anO觅fYÐqƺ+^AvKCU8ErNssHo=䩍HY2])E.ʯ!⵶|/ gzpy„emB"sE :>NRPS#* u#}F5_s]O?:ۮ`s"3wI6'n }R "d޵q$BR>"23a<=,s͘C>%Z%%M,6z62mRՕEFDfF|AJ'B6$@B5&yܤ|>\v'8F]uĪ<$_s1IR1įyn' 2g" ;`-G\T;=OQzB '#p AZvyh-6CWO1͍jfCEY7{߾C~tO[;A8P+)J. Ē U|8γ v6wݝg^Lr$R8}XFXiOw}] $4fK y@&R5}۷WO,HV3}ZPUaV-,( !K̕ڡ|*6TQ08[1l5{#9{0F.3\Ta͒7B޼s>OC^<&Z QFhQC FWҀjO-ʑEmb}Xpznֽo@ GPT"! 
Nj`5/.bvF)@z}?0p'ڼdYYT[uYpqp猍È'8_wLvd~ϲ8S7 nq7uM|f93oZSy %kN9'R&T㥼^Es柆cd#]bfԁz};bîC[?'$_/}n8>^!*X8) HoKRwn78{čүjY/"^.rW;ờ/&9buL jɘU: a`f~™fW{#*nǓ3>|fts- wFJ.^r`9lF3;'SQl30G7 }7*G^ ~~rQ~ȏŏÇʿS?79#:ZWWocHWmiOje8Wv> $H>W¹+ȍ9M+?;iIިui.:3[z5Hl+V(7C\ZpC]w駱bKT1 Vv)Ja u^S :A^c`];fa;+i֣ܼ`$5ivGi4P g'UO<~PޯΏkvcrp>  5' n ަ2-Ҁv`(V Ww'(P[/?5m^'mћۛȡfb#=(ֺ1`b$ .Fq#NNy:mp`lPZo#s0e?C8j6PUrќJ8Vjݒj8[/BRuGne6-MGkxQd&:_:};ZanĊ"tP 2hH%;SmϗMqxr #j"%z  Iȇ@A^Jʘ95:/xgy9SC%M=w=Lt:X}q9GA$k\Fa2ItuAkX 8F˫V:S^Uyo.{_z@-p'ꦛܒ)RZe$i*^X"-kphb`MFJs(<~S Q'|i8wiyVnjFSÇ`@ g7yp16_+}؃[{ =st p~Ѷײ}΂ $(]AYRgL!K~'z'Dpd~Z!`0|-(Yvd\k5L=wƈk-VDvFV{Fj=eXAZcˠUҡUbgdт4SF-19})ӛ.;=а{8sERS&f§ySCCh:5;JF||*rzBHBB=;VCn\=MBsB=`=@h֖ =3p9:YD*9Q&sDE$!*1of6y{ղ(oմz#rRBP[Udl88UL1$? w ɳi7W1 ŧZ|NllquywG6C_^ZY=}##y-]]eeg7ʹ+LceO|+{oE3iG㿂(xč) ˾\{}`5/K5h8ujVt/14e6;8L)X4׊I&^KǐGYM -`2kO`_olNG!TVM>- k2s׎p#/;`e'AL]@}Jv={0eH{ FI8ٙQF-:@1@P,I19HB>Y(oiYp`X%nk]Zj!s\zw%1EU5wl>MD-Q vRt: N8G.uFD"Kb^ 9aOL{L-/e5]wU:YT#r1_T.xH}(K/UOᦿݖuTng$˫U#aqq$Y|F%UKrtC= '匌򘂮IkiK&Bg.+w!2RKt;}Q$9U(JHt$HVΒ_zRvg7f6aĿ: |5"xktP \p\.luJ "2YIJ%'Uh"/(0rjp3ᬘrYCC-[!۝Haݴ:LK09}Zq> >V_-, >|[;ii'ii'Hbח%"Fe]Nbdjc;k0,k8A 55M0Tm@%3b f,7>DW'L欽RF m x' 'x[,;< ju`Ʈicf0zF\ԡZi^Ǽ@!YEmg&+lP(m̵7 *H>jE,Rp,2f:d&QZE $b,Q$#+`0Ԏ!ylF$7YmֿYdp$4l_2z*N7ox;FVp';PeOcƸ)q$c KZAtVC6i71~ZZ&ּ4AĶn*H3K%@u4[,].|Ϟ76 3ZǍ[([㡞>wqo8o+֭>L~?hƎUţ4j Pf4%.Ȏyg3$~S{BvbZ !/θq8ucW  XNIbۻm(l]"*]DRRLV9k(R.`ȥGJ# [qH+u"vmHv0{RÅF#b ~ܷ!)Δ7O!xXX0I(y"FW'X-iD+fژ~~BuzM ʼVisMV5e΢_)e_sA2qRJb:k <*QgmL[rG$ [)!{B*>u}/\Y'FJS 'n![ A" qU[G3!GtV2e#^xȁ'.7H[8BT:| !Jy:?6.п}=&Rc|HV$谒Hj[Vnm[XĤxtPrO{c[更נڝV@H'ݒl.$`Qm٣55CA1I+MR<禍&- =kErʄ S]%fK0zC* zQ` Qx,u(Z[i$KJvP/ZVyhQJ2d2X#Od,H{ p5(3=ޔqY9frfl4d#?\VD{ףo,c7}m; 9nv}kܿ7gLEY%$HiU;v"CSODe@?;@./c7R$4gPmޜV)&[ɸd [bn/<8߸MƓcRs;f-buonJc7N*W2@.P9!sRNu{h]ۆ7ֽ;T [MS4Xk^JȢ4w)"=#Z7%mXV'.zdVAmٷ y4 :A졳@S}qż- 9 A 39ZcK[95HNq3iѡYҽ-rGh3yh-W5(˟x}ۼ鴅ݐjxJPq` B|׵*ZP<=lPDdMŹd8dC2Sj(N _"IPv ;Y5X&?zct^_T :?p##njbpzԐ/YJRBa\KnT=I$2davԤFs'O ΐF[nxyP^m<\yh12glCmOi}1c42±(Q 붸E=hίrxcv2[Oh+sC{Z.iڳ6j鶳*EefHky-SN cGeD"-qwKo+M8N:gf(7un&-Isܬ{9+cMJjUShywnё ]K:m#/}R͡6ܵ"ܹN͠9 YIgtٺY _g6[XRH~۳-:"粝6(!bYGIoW}Wu3^j+D>r">:W%66hb6@~wJPvrTp__~z""S2.n }ztjOiMo4Z"J)?3?xNV#Rkw$٧|dc`JbAwE=ZhքQS LɌ, ň=肢D{(B C*!\I@>X':K*N(9I&f5e+m5ɿ@z^M$JywMN Qp`lq~՟DIF8@I7$9a_® ^?-נD[":FƧ)^W Dh(AMQpAC@ E豗}$fps&IuߙI>d"7 `!F#ejNkkw;c1vTUOks)X; KKY*wrEKZ1ƺN.$36SyCχ,-^&r0Eٔ?aWKpbSp [u!y $H U4Nc.Q "oܱU'm|)ܧ)@!(b_ܮ3A9 ggVcR4}Pk( g( @߿gb Miuɮ>89}_) I DqsGZ1b1hE4YZx V PVeroC*JઘC'?N*!KZ"}Q>*1w"!oݡIj8Y|~}I'@Yr!ZawX>mr(hƚ&MiT_؜f%1%dfr*7!t$>QEg[|*A ϐ6AR]1-PxStEy89]ED}yMƖEĥ9 i6Iں{ba1_rO_z֞7.!~??9DӽCRprF6ͼ!GK`Վl8P꧋ B&f9!1N+jS2EyIFБ\- VĹ&1\m' ft%fZixM"`ؑGq۠5kj"J 1"/T8Yg0[k}*/]_\ QْpilC&s YLe:T6e&U!Lf8VC_[ %NgU'BTx6 z%$ΫX뫺G uz~yDN" j)ȗC c@z`o>n6_5X߯~G$Lmȵc*uo?O'㿇nCoΟP;1W\?opc&$֩J(I$RR6=)8iV9M[ wX#"#^#۹RgQlJqqv ~|͎gL"^ LYI1m:]rMu֚J\Dvq7!DLFCr*.%P)vtVك!<}W`Η{}W ]|FvN>lٹlw?9Y]nq K8pc[-3@35w-Ĭuʺ9wc:js? 
KD넴$8ʩ|TC)xoJgie ?^"Zs=0F Z i\L!~:55d(K`& 'ڳFLFmZҡ]R։8#\nc:^ ($> WWӸlb.,2$Y*s=7(#A1bhB1O9 Wrdؿ)Ώ8$!8zV~Sc v.{H.7[ɞ;nd[ɤ#V>UZBQeW C z|hy"Ź֯lJ3{T7_ߜslYMhT2; .'ɃnK#kϿ( 6s!:4f0?8yx a kskZ:)쌘i3#!Fc~9dRo{+s"[k91a ^;qG2zH-TE dW0qX!h͸∊ѕ,(%Eѐr0^Ӻ8#0@ '/O`w>]f> Q~؂Kw\@EH\G1EءY(*n)PM^XV&ǘPS6\RPD B&j⎢y$@s|?톩-"ɡ\^ٱlM!^d^"_AZx VƬS0s]Aﱲզ嗤mjbK@gb,+bAjG-S(m)KHy rejetQÆdK*K|5Y`$-$q"VĦץj8RnA<$zփ3^X\Z'VT$׳~-"SHa)8{u (wcRd͠Rz'l@ijĘX'2WnZDWCuRˀZ'Muu RI{Ő(A|!HXq]W Fu:ޜ *'YI.>S$PQNJcY69cJ>{:u,@ְJ cMLnnO0(!mh,ΞB}:\@Zp bejލ {wͦ/Eӿs9PlNrǎY.hh$RϴZ6Fl 5$+8 \ϸvD-;^v3nپ1gޘ36:f6}ef7ʗM;/2% U!tn<@"pf(|7¦V\ɤ\ 6cZ<*ႤPW9r0Uu|.'fu<XO!(y1aD@`1s:ch֐( x%t߭,=XJ%<8E/=rhw>Y>D/x|hs utxk/ڼT s|&uyysa6=b.VH>s$)4yq'vhٙ\q:\6xH6_ߴjO*^;j/]Jw"(XJrE`WO?W(&ǜ|_M!> [LFNqtpl)*hQk]9hLI4K\s g:h;Txq9ZRdMhE^n߭fAXbK߻ kɹ$3#?E^Hh}5̪+[FaA$^`_'lXQ"TLImowzg[7tG蝵 C (E7$Uڢn%ˆQ N^&Low4O"Fl]zUT%]O'iܨHV[J35Eq#AH_i(=nivm$'+lQJ2_"悅Erwo:Néz Д[ Ly,H0_Q 07LL #XyeP}T*HY 8+%vF].s8ܾl@E\33$FA%su~uk¥yz(^|`eU>w)m^(_r7)+yOs:_w7Nw'䛳|?=? '8XyG0bj$?mt@^!k4 #5B嗛sqD[{MJsS&F 8ᡵ (½t/=~',"kxvo YD#H*q805gy\iTfM+m%l;%,}T5a.Fyer+x{֑`yLi12/kB[l|YjJ(Rsxx`86M.]}u٣'^[gL0&cX[TsINFR DNžehRI-$djgxrzZw@zqcgm @]h`$n&H֮-vC:_/O11Fkp“4oeguOH')|/iM`(kһGe5 9`BE,۴USe_|/FFBMʘt g{V͒q!ԏB>?7[9q9=' .E _dc|B{DznV:XZd&j dK%0$ u^]hG}.s>[XM"ؠ>}y]sbj1~oU@oDaT)[Y\FU`#HjȾsLq mmgs]Pxf[n"Rn֚|J:`a b㪎RI67!o0VGw4rۡnjpv0c1{gsfj=$,ܬ.z(شZu[*!oǜes_/#$M wyP@0dc9[@0eHzؒ+915e߽!yo=II\*@LA1RʄբGPX.QS RېFʟJZ0I#[^iN]*ke5o-23|V@qW?>Ra?`do`&}ąUK#TOt%`Q C 6@N5ܜ܀D4ƌͰ+^u$'TCY v :8<V4zd-/ԟt .VتOO1ћjWL(۪Gc&5І<;Sj|lH+_̵ VJ-hTD 8Fi{<4@> iс oJ|.s/y!윯 ixg;uk+>$͉O M5Wm.h%Z-LzV𡠍Bu2\t"zBb!OYvMuߤdsDMt"$U)j{@mZDV'S׉SD@_̓G7gw(fIY,F;ˇo7 QB Y $A,A`MmO&J<#j[g&um=Op>p,'$gFs{N~1}iM Jj UJBZr?5K9 9eYMMX4C~k;a8a=cJy>l-mznF;Y 5پ HБUɨcՑ>y'C[S-u薵oD[5$dLؒk Y9hMSn>S˫ fC%`Fhd?j3QƠL\5kg}j{61lCr8yѦ7±$_T' 9'e8O~3I LQ>~y**8r6{G*TC-!Q(kسR-6CA,oA C/)H}~{wq[U6QgDt Rʼnd#;PPBi);z(TtR<9Kay-`" 6eL ev!~#elDV*hZQ- y-N@ca/`rW8\av3޻I7:sK;,̾CN/\wZ_qY{:ܛ&DΟYsӪnZtwcU[b:=;ie&0,>"v|jcV6Z5g.pF/uiZO0 ޞs}b³zalaǬ#<@s"v|n$r5Vo+zzu=J|bA^O&ݼgQ52ƛfm d-?s?kHJ^9>A;@H2[[=km>`l}[;s:f@wl 9f{;rr|SZ\|P+n=>@ʝsa{4,*,)%2"k*Σg{m9  m *8g:#54ofHkFvsd̘xbs$?Ƶ e.-poK/&(3%c,15:e@`8tۘݭԌPښd5 * gwy<4Q8{?TxW9nߵC|h}gܸAP9 *F?b@Yoq95Zr~3>Fy} "ZKf;hZ=0ٙ* 4 F6ki t0"燻ߪǗ.qO65O=6=hݗ\^[#8h4n/|~P|\54~xq6C*٧@j~>j}Os䪻yӔ8J~ԁ6Xc(Ǖ$"bTbW`,x*_īBX}EO[ BuJ* @%sacz?8fRzcz7d`o.W_Y:߈<]́1j'nyHCUm=M`z/luKCH"\TKPV UQLh$ $,2NS. >ZջgI[^=]S&,ZN.l6 O1D)*,?TՀŕbs!/. M$}j+}}}4 4J팇#]˫ pf|pmn(׷W[ k Na|T /0nbqaGXAv5Z3L.Zf}8Y(;'/y>:/eh B_JeA eeVlwzèCq3;cӿ'87ڢ⽹qknK+zsK)N5y6ybJPB +Yq9QG_}w25(!`bD>͍T?ޱVڟ{ęn~M6BWC>i7lwpWڳrxlG5_%٭*[d^T?YN>/?$K,bʀm! |[v`fX\?/ח$zrzUe=_.$~WƝX{bh2UI>m.ex\Wo}8jq=d՜I=Q~֐eO2|D7a$~[~p./qO2G-2Ebwcr$o`"l`LBUiܮglHZ^¼}#t)!SC7]l.s*Ƚp"*-@-߾j :p찒+jEVk-aWd4bK947@3yxn%EqZ$[f+ZLUU#d㜃0)E6xϫ lES8>=B͛L${#EJb|27PڜRa/Y\Kt!2 7F;/Zb &0|m hQlltզ@VEXrBmTLڢK91Y3Ki*(uj5N(sD)lHP|O3"%)EsHA3]+T\ԕ[PR| 87{I0 g׺??&@} L8𜩶agf́ xu7Ƃ+,ntG┵`4pXu:i{8oPِ]WaL"8:R]8m*V HG1JLu `xMjzC"7P"Zh3{<_&uאZ!'B7c歎]YsG+܇YǸj"fe;kH? F"$ѐ,i}WFAPC;lDGeeVe~S9kh@`|eQL@+KȚ4]7t0\9g{NܗیRR76~F溪KW4?nr+J$볶( aS +׋ i.wFK[h昪-cxvjyF9sZB8<86~2Zo@SQmj B矀1cZ}$B!ذm{>$` 5?nA,9Jbo]e&h&+d엫+4A2R`V_ybpBXfq%p0%ˎ'IJvi`DA=h *rFU0CJn>Cڌ?6{pBPf@]t ?N:N?7Z%gNJި67 SD:"hrfQwq>fclK+,w~9 8bTp+Pv> ʌHgNcq&9/k[&q<=FHk$)Nw:dKK&1~kOLuHA 0WtW NF4H;욠.' ~ }Zf윒K$p͔vak?`bN< O&kOnjP%>L/)6{@Xv TBz*zr~)cDv6z8ai"yG:^vU7}-Rs!i^w! 
$?p59X\\kh7^?V:'iͥ%b=Ře-rIݧíqP{0QZNST>LՍ-e2 k)4)MMUiY )z|BLD8w闄D`wgȮd/2pM&ve@@7f/S_7e/h<'>Ǔ7x%3h*4C"5ޤ1,Ar/T8R1H0$"J/kd& t = >e c2cA{"Gʉ(GKHħ,yJGϟ:AZP 흭M q&{.*򓐿R!9a^fCم[p8Ʌ1ntn9,B!O}YӁGF:}5Pn0NsBY3;}9J)w*bF &~g`Wsaa9|՗_T_~˯:Vlq 08tlQSe\Fw3cP,d?'j^"B)ˑ!ھL!00i?wf㛰&[|E;hMm=0ê#ڼ[r$d"~Zë'-mDqʢbNp, _ޯ։ID_d %>U3J|RO)~'Mk"!Nzʇ(b~?ȶI%gz)rJp&`YCbWl[UfW=M21^̠P˯zMˉ_,yCf%~1d/襓%jp1 ivf n)EJ.?eFL_(F_.$I){|]Eܮ ^p>`>o r>狯pQ38w+qɽɁ$8#j?:fW=G|xuyɤ%) Sw^si,rȄ#;DP$eI3 *ш1V{L{e\Q$\ol{n_㻢߇zL_>䬵A*]]M֎Zct? ގj'~kdD.Y/+G%>NN3tugʦұLNcZ3ϙXpSE5g^ O/M%*ǕБnO.%9W,Q|$4lJ;5r6X!•X!W^], y>vfv}ٛwiB둅۳DtfFq>Vݜp ҙ4D .ԅf|s0's ]2)ٚi'b߉$Qbd,wN6?>[eG;E{'[< S̠>iQg È3ɇrP]l 'v0F~lKlSKiDn5h,-yjn.z[BݦFugZ ¡ԟ ;*ي+;ug!)aA2HNYi̬Ҍ8pD 2ZP)cj!ψ:&#' N=4k$?[9PwYnS-V2|[˥q1q݆ϓ"۠e@A4N_絸oky0_(T _*{~ QNWb6xRRb)ENå.Tٞ@q:|p 6y+!ZxF33uL[af# Q'#׹^~9=-?*a&M5[WGk iL)AoXu:I#( <Z Жu0Lvs7@Z╿{ ݍқMd93mX- HX(auWbGm,TTºH(xp"h3`T #4 S Gm:|EXW Zz4( !8+@B( wl;nSChWZQV-{:zx@8 PD$"!`009Z9m-p;2JK۔-า'xQaqw*a~^ )l6RyiԹhݟ8f[hwQds"gm8Lshv.AM,uJBA*)-ьR22jiGA\lDRyDnQ*)^Y(1*hn\,GFubqp#tYb`95X *X'fFKB07 ep]0ڎXTJgG5WT)|fKg&(=.O{(5!#A-@[?y1he 2 %|j62bAZZJ㈑ 1j4a"H>f~vFT30`oC-xGٷgk]1fRp% yfLH֣p H%| 6!{6L].7jrs}QFR -,<[9 s We-HMHk6ΥZIAs!F c1 |OLZUL]ȍ@GNe]uLSkf.&~jӦz`q+MrJ9Dzn;Ǵ% *MJX4SGiM>QbJ]Ȱᘺ(Pzs0`x 1 PC{블,Hucf:`#Ǡ@ӕ֌}`]sXIqՙ0I=pWɗWQL^_; $$dy^ .B skMEgQ`GFx*Jp MK7^ܝ=t3nM1σR-!c[{:U&R75 K(& C{} x#YRh#Ajd$l`=zAnZ_B,J^=1Kd@!0Vq!O,,'_qP{.PĜ8ln+a N\x@E6$n1Y^;[ l*3&4ݣVa8r,zLVM}p!c8Mb ;$ugaZE휋, ' 7@? K5YD Y%"Ɂ2>VJ=R)I9,sAs.0 Wd{dq}Itt/iDr `J-Iʖ՗4`)#p4o̔חd -ien,Ӣk} ދwKZݴʺpMjfqSb3xSH!Ғ#6޺+_-~_N?uU[+u8Wyr;R!̃5r?1yo9My 8C -nҲvŗni-ܲ겹lOѮ$m rը}0 "x`W9i",r 6zkrw7g)' TAg]%_˙*9SE3Us#H8hV!Nf9(kXP$:`z.\@7;z_}[_zыV_bgaes+nEg.h\pg2'#Q)}I^ʀW bH!7Skeu,Jjr/4wNԹ1bD>DdBD )ْ>" 0}$|Wl-5S${Ks#/# pait^p\Lw@ ρej_bZy)1ԉ֯ޢUF)O˗D_(P<ɘLA;iFQQGp)m#u\ه&@vO^-/,՞o|*+yG?0FyM[I=)4X$쭷9oegvy{ -2lIwJ9e~jN`z9ݴ4|lm ajnӓkq7iN*k ﹁˘'6Nد?77 hGvrպPjU'vF'qXFV 'pȷ]մ ѻ?F=\h/1oo{G+͟Y=8Fuҟ>,yۛ0 8R fT$LH(Lc֍q.h厵 Yo_o=`R0I%o%s!z/&,l:}rjQp|l>J2P)S{Ck[9.>M=adD_c?9oߺ]וB缾ŀ1\_&\tV54ǴI-*h>";, yn8PH/{ <-UƧ4ŖZ9oeoHIW2ڤ/#" 2GJl*'U-vE'H&D\OP Iј=x̜iV׿-3z @et,^M$BF_=|g B~uhBe'qVrucPv)©ZZ&i@30Ϗך y!oaj@ȥ!RNYT8ï% I*IYh[Q&zߠDIZڦ$s-&.KZ+`<A=QzǣH[&A:j~Q-Y%DM$S dT@xM̱߳:PrjPmN.^_Zonhor?KoMs~?\e1%}Oeӳ]Kbys߸/~_eFS0G@ÿcNA3@lH^jN7#mR>Z<2Bޛ\'!(x]@8Z"7S繄NFΣpBp+1h %b݅"'tXn8|(F }\S9PKCg@LxOg \-d6d-*e=шFߐ5E0Cgd.cؑ3>OV-(.l~D|[ZRs)p̫MR7 tZ_oLdHMF>J;k6EC[ yuPQf%3N͞Ub-ɪc>(NkitL&g賴vV$PiIs *WvnknO*fM8K䔬w\_,;wv|K${!36=;Udn>SF ߇):WZR 9qya.5tfmј$gb.Dd^;g(ngk@MG#8C$Y!dF9F&JGYLi#Ӑg'X"V'CJ:6~iVx_/pPco__ g]QlpWw;es1źxlNOsbQʀ]6\;LHrL<}c8z4rtac"B!8i멷{&Pa\6ċ|ꖗSi\N CVeqK 0yY5u#u >FeK-re Jܪ6|Dm+rwq5h|bak:ss\A^NWXyyeHI)lfɖj"D[KS0*E|ty ia$6<*vbCv\c'` p]0dE|,- +& mioa|v:e kY OH_@ePod FǐOuYL2?;3ug_Qg_vqX[&TD׍ Dn`ȸ[I7Z5Z G$jGَ^P\Vڽ@=p^7/;$V7Eo5]wU} o#wPXʈK9~ OOn.v1H!@'jݥ`R;1r׈K>q`/0?{dD2"`%s,{afHS$ yDڊd ВPdk0"-)w?V:1}ALIeAg K-411 *Q((!>{ztVIOYء=Jv鬔3lN ݂=0覫 ۛ:*OޗnPA}ś!5z!"KJ*YWTe FYHj7qٵnJW, ]T}"d&< +Q9zqǁ.JW*Ԃ3cV=@sÃ{wޏIY]˪UɅ1٬YMy/.xc6&OJ>J>J>J>o+y򚂐1$20$PPdWK#0*g U2Kf?f~o^QPtuОz>[K7Mo:)y)2[θ?,z? 
;:1/9hk)L[´WRwlզaq.j{}rI0ja&KqXZX/f V$u{n7҅seCy\΃ eIGpܳ\+hx*[W>ˬX2=^/o^_tpbbVԪW>PqL q=N5ko #c_s6ݘ=nQfx\i|+_s/]Ɔ\l%G.y*iBp?oV<} +%g%g%g%.Ithz0Z"#7+m# sJGQ [^$@iiE2II辶N{)PӴkJnWU]GwUuka/F` oDk Ci_3=dJǴuFrOX햕b`-A)Jr]uPZR٧Hf!Jƪ]F-џ@r;B?;MC?-_~Hg(P {VNӏrv͹Ez T \,e~ኣ`V.W/>AcX_ԇ%[uk)7O>6Dÿ_7{Ǯͺ|Xlo6xergxG~oO?[ݰM}b=}hbץl޹%W3j5D;?B[~|${Ikbp$ | `A(1$}{g~x*s3k 9`㒩~B4-9<2Cr̩$S۟"A'tLJiT|DŽ)d,Lu\]rtT/KLn}99._S2du=}baZ|]Yomi:rI}R晴0+tL\gd4u9owycݲvs\j-lul v.2gС;"B grh2IR*XcMK딦 /d "E)pid):/fpb3a9^?-2L 7-//[suS*LDlЅZ)d'7 ke*ze8\j~8\18oζM&3*VD>zf-[< #r{WEӈx&=L1#;LY"H69A` |HR$O0wuĀnrE.)̆N6y1^l>??X9CB9L H eælo9yr/mО_= H T@P3U"SKoA Rq1cLRƚZ`DžԺV6jm: Z`iwdJ9.25*$v$ka^tՃJ f"+NgГɬН צ:G;Ye]ڙr?/]@\ל:]e5b,2\HD]yVgӭmԑ0T] 8LXcbX&:&:z/r:.5]^.#v҈l;P`:TBpsi U޴YN]FГTBtK0VgGRYOu.M^tlr[n#Ny$JhEFYPM1Q[ޞj.z]ik%r &o-_.(UƂ8M[@̬W胲4:ۜ{;7}$$vA eW3R: ]ug?-r!ea8'vBߎ0mٙ*=\DK3R Mx\[µ.̆۬̂:9&tRܺ^QL=CuRH#ji<)`79l{ sDo-?p.̌GD|XE+$:u:+}?U'6p|]hP'!Ti|ћl:0zhޖEi_u>\~qY@]JpJ[AJ M<2Ԛ]cQzu6?3D1SƀEU=v U_:&6cE:kmMkQK0 ruorzi,&{' i]y TES@ Z {>Y ?ܙzD@"iv&jBOxGfykRk"sw3r{qUƫdAT`mOXՆ`*ן!Xc؏|L X}C-u* Ejkmc%8O)nd,0F]m";,ٱE[ĖHEG5Mj5cDGh݁NB">bgZp6amvI~~$r{!~8]/ f~]ޯfB6_c ~ .lo'>t"=|?ˆ#'X_Zd\(~PeiܮTK~u;O sƯ5Iz0'>'F>B-RIoZy*}"xnf )T\)"ul3=COb)GS-b1jZE5lY~574K_|jBbCyj!ZlQc KfJ"(LitHP%EQ,}Ԁ$[B,L;&r{{Na₉cx17/:F1X. K&n({ ˭۱A R\yF9 Q`l/߄0:fT\J)"X}$smVBIU8m˳!&;*V{{+*n*H`K]O}gyRR\mJ+29|Г$hqaL ծb9}=dK+:* [ 3v ;O$Mw~_B |SAQ$LHNA!U}؋ Uh&O)8rxU?3`Lxh˸<O ?ޜȡDUY=RVd+9X&ؽ޾`׋y~%KϬVvCkЌ"ʤJS2>_)a(a3׍:[*^>G~Z&lS);j#uz ?G'hP!eq,/ >  KSLQIO=чBX/籤ݶ%h%'L sC@Y]U*UmTS?vkyqjz (T?--"Iy >L~w~oG{* = Wmz.$Ӓ5Dm+<4HuC"}u ݂.mթdޘ&w*KjOHR\xGORhvgG|izE$ mn*C^D?$Fioǀ&u#:qDQ})Ͼ wWRhe }^4`VӞ1^O` E-\N$#P*$n1̖tU8`z~PW*w3BE a9.BŎ2[ M Q/hտ 0ؽ{}89o#\%W>UrJPݍ !4JS6sK}g\]ЪfNJjpY_[tqa') BZu8jbVߛG6haSDVnά)9ScJJyB ńl#4&h|mjԌncݛ 5Xckް-G [ɑ`%"`G<5:MO1,;R PTI$50jriKQhhl1"v/wO:纼o!%rv{5愪YbIZQOd )oLZ΢Fr^|JT>ނ[y{fk6e4QC7qypB_%OTI?j4MmVZԬ;tk7[ rp:-z:< qt:_dlb1$5a^ymo8bzh\*7 u ,.J'y"M6y- .5(.[g/.M^:R$ '~qy:h5ŝ!-jrܒރν 4ɜ~qD~QLDv' .=}!:jbة`Z6{yѢpX_D!/ f-ԜCDA'd ZN9&_k w ~-HZh&L67Wapy}1ߙXRв&rg*^BЏ% tBVO{7w(&H[۬`މdD0\k1XQS'_ʹס֊{3JIR>nsRnFcV3-#--`]&DWeybRPS b =VD\p]"/Blz,JIV86)kW~3Eԋsb_o7vāSɔ/Gv%"%ƨb/hue5D ;+ކ8xoTɛM,̏AiJ\,+''wP{moڛJU>0iW2َ;Ef53wbU7V+&̱HPޟ  cdjO 0{qaO DYۍ;;2ڱܝmI9ζ9֍c3uǹ·Կ|m_x?8݄eqk[N1oܐ%Q$U,Kl7Iv֪fʄf߻Vy$v[$N%כg!`AעlTk'k#ZZ c~tXd`$5AU knbl$Gfcn6̾j,ΞO*mYz1<}Z2!6\ Ucے5[7 KM =jT PXe,U3ĸ7F2:Dyhj ixX<8)ț+%VR>#y40|ݡW#@;lGUoCj4|ūIbB-DIXI5It 3xX}㧺#Lłk_{ Yw\P] [S"ڥӺfDz–xV }m鹶&IBҀo.&Q]p(Jx#&2@,^ mSBu}o(llN[zs1ώugI;w㞺-2f/v}2≮= l/h%<$=M?gS񳍊mQr_}=NOslؖKuA+r,̪`D1c/pFUliYQ'Vcq9O4ܻIp}Q,Tzy-q9f>/#lc"=oPa Zݟ kYQWJzŜbWN mw0'v>NU8Rz$>6Ex~ C!ĶEJ@ʯS-/~rQsG VB WBm#(&XAn8]8FO(>DFc^uF<{Ў!xuSE߇eH{** (IQ4N;Mݏ@"xVc|mu9Qti 9؅Y\E] ^9y+mVkvё Eg,7.>2 Y*j$~:&$ r)~o]{ N$ĉ1cAo1S'?}Bx7G.|,][sF+,=Ż53=Wo!R6Y8/ X"e^$;O@QMREU٢H)TQG9ϼ2YNl⏓R˄qrr3j 32V1qnmetHk*rt'@14'#]ަ8 y6"%bm(Q%~\8CU!BmSBIOipHb)Iэ{) Jfn惏0,q `k:@1Φj3Q|!G@ T]$`Q#Ȗ^="-yPE5Q[B^KJN-5WCFR$ 0Q뭢)\=bd*V&O:Aq(Q8GG (F{Dp͢a|а^:qjBmOŘuzw+?'w˓{ƗYyy!fjKe6'7JM'G;I欄>Vb~Uq!h+G8( Zp#|(G>?񯓎 J3{ce৹4_0Ru/ ?#N"͑}\Fw]=x#(cOpnJQFYptóQfQaN$Jc[ο$ P˽  @h '_ej[(|E$߹:4QAJG1^ϩ^PH ~YE(g3PO`4 Rxg(KeZf44 X`ol@p4؇hLtY{>_k%ZzZ 4 JaEo:pS 8L~Џo;q1FIhB~ЮT8S?; i|:F8FSt̂gVx‚МiC^\pbS2D$3ʱdk+|w0Iw\&R&jWзYqΔH2-|Tl-% .b5NE?.ge)R 'wظH߆H2OJȪ"͔<bh0/HN@ZARc(*HҎ2 í +5UO/cʾHILOQ4#ʽ)4EʌT)N$n"B;3)ޤɌHp1Lf)i!H?VQ`ps1q|*F89J l4Mi&:?NKw(Wn11. 
var/home/core/zuul-output/logs/kubelet.log
Jan 28 18:13:07 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 18:13:07 crc restorecon[4694]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to
system_u:object_r:container_file_t:s0:c968,c969 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 
crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 
crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 18:13:08 crc restorecon[4694]:
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 
18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc 
restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
[restorecon relabel log, Jan 28 18:13:08, crc, restorecon[4694]: several hundred further entries of the same form elided — one per file under /var/lib/kubelet/pods/<pod-uid>/, covering configmap and empty-dir volume contents, etc-hosts files, and per-container state for the machine-api-operator/kube-rbac-proxy, kube-apiserver-operator, check-endpoints, kube-controller-manager (cluster-policy-controller, cert-syncer, recovery-controller), machine-config-controller, hostpath-provisioner (liveness-probe, node-driver-registrar, csi-provisioner), route-controller-manager, and catalog-content pods (per-operator catalog directories and catalog.json files, alphabetically from abinitio-runtime-operator through nxrm-operator-certified); every path is reported as "not reset as customized by admin to system_u:object_r:container_file_t:s0:<per-pod MCS category pair>"]
Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:13:08 crc restorecon[4694]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
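
The flag-deprecation warnings above (a couple more follow immediately below) all point at the same migration: these kubelet command-line flags are meant to move into the KubeletConfiguration file named by --config, which the FLAG dump later in this log shows as /etc/kubernetes/kubelet.conf. A minimal sketch of the equivalent config-file stanza, assuming the values echoed in that flag dump — this is illustrative, not the node's actual kubelet.conf:

    # Hypothetical KubeletConfiguration mirroring the deprecated flags seen in this log.
    # Field names are the upstream kubelet.config.k8s.io/v1beta1 ones.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (the flag dump shows the bare
    # /var/run/crio/crio.sock path; unix:// is the usual scheme-prefixed form)
    containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
    # replaces --volume-plugin-dir
    volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"
    # replaces --register-with-taints
    registerWithTaints:
    - key: "node-role.kubernetes.io/master"
      effect: "NoSchedule"
    # replaces --system-reserved
    systemReserved:
      cpu: "200m"
      memory: "350Mi"
      ephemeral-storage: "350Mi"

--minimum-container-ttl-duration has no direct config-file field; per its warning, eviction settings (evictionHard / evictionSoft in the config file) are the replacement.
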
Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.529587 4985 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537719 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537762 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537768 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537776 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537783 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537792 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537801 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537807 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537813 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537819 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537825 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537831 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537837 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537843 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537848 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537854 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537860 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537866 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537872 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537878 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537883 4985 
feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537889 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537895 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537901 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537907 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537912 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537918 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537924 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537929 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537944 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537950 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537956 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537962 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537968 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537974 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537980 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537986 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537992 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537998 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.538005 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.538012 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.538020 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.538028 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543003 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543025 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543035 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543041 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543049 4985 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543055 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543062 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543068 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543075 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543081 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543087 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543094 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543101 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543108 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543114 4985 feature_gate.go:330] unrecognized feature gate: Example Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543121 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543129 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543135 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543142 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543148 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543154 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543162 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543171 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543177 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543184 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543191 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543198 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543204 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585898 4985 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585927 4985 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585943 4985 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585960 4985 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585970 4985 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585977 4985 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585987 4985 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585996 4985 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586004 4985 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586012 4985 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586020 4985 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586028 4985 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586035 4985 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586044 4985 flags.go:64] FLAG: --cgroup-root="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586050 4985 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586057 4985 flags.go:64] FLAG: --client-ca-file="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586063 4985 flags.go:64] FLAG: --cloud-config="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586070 4985 flags.go:64] FLAG: --cloud-provider="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586076 4985 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586085 4985 flags.go:64] FLAG: --cluster-domain="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586091 4985 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586098 4985 flags.go:64] FLAG: --config-dir="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586104 4985 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586112 4985 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586121 4985 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586127 4985 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586135 4985 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586142 4985 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586148 4985 flags.go:64] FLAG: --contention-profiling="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586155 4985 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586161 4985 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586168 4985 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586174 4985 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586183 4985 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586190 4985 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586197 4985 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586203 4985 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586210 4985 flags.go:64] FLAG: --enable-server="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586217 4985 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586231 4985 flags.go:64] FLAG: --event-burst="100" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586237 4985 flags.go:64] FLAG: --event-qps="50" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586244 4985 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586276 4985 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586285 4985 flags.go:64] FLAG: --eviction-hard="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586295 4985 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586302 4985 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586308 4985 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586316 4985 flags.go:64] FLAG: --eviction-soft="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586323 4985 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586329 4985 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586336 4985 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586343 4985 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 18:13:10 crc 
kubenswrapper[4985]: I0128 18:13:10.586349 4985 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586356 4985 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586362 4985 flags.go:64] FLAG: --feature-gates="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586370 4985 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586377 4985 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586384 4985 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586391 4985 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586397 4985 flags.go:64] FLAG: --healthz-port="10248" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586404 4985 flags.go:64] FLAG: --help="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586412 4985 flags.go:64] FLAG: --hostname-override="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586418 4985 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586425 4985 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586431 4985 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586438 4985 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586444 4985 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586451 4985 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586457 4985 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586464 4985 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586470 4985 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586476 4985 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586483 4985 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586490 4985 flags.go:64] FLAG: --kube-reserved="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586497 4985 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586504 4985 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586512 4985 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586518 4985 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586526 4985 flags.go:64] FLAG: --lock-file="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586532 4985 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586538 4985 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586545 4985 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586555 4985 flags.go:64] 
FLAG: --log-json-split-stream="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586563 4985 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586569 4985 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586575 4985 flags.go:64] FLAG: --logging-format="text" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586582 4985 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586589 4985 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586595 4985 flags.go:64] FLAG: --manifest-url="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586629 4985 flags.go:64] FLAG: --manifest-url-header="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586638 4985 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586645 4985 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586653 4985 flags.go:64] FLAG: --max-pods="110" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586660 4985 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586666 4985 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586673 4985 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586679 4985 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586686 4985 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586692 4985 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586699 4985 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586716 4985 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586722 4985 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586728 4985 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586735 4985 flags.go:64] FLAG: --pod-cidr="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586741 4985 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586751 4985 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586757 4985 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586765 4985 flags.go:64] FLAG: --pods-per-core="0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586771 4985 flags.go:64] FLAG: --port="10250" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586777 4985 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586783 4985 flags.go:64] FLAG: --provider-id="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586790 4985 
flags.go:64] FLAG: --qos-reserved="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586796 4985 flags.go:64] FLAG: --read-only-port="10255" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586803 4985 flags.go:64] FLAG: --register-node="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586809 4985 flags.go:64] FLAG: --register-schedulable="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586815 4985 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586828 4985 flags.go:64] FLAG: --registry-burst="10" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586834 4985 flags.go:64] FLAG: --registry-qps="5" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586841 4985 flags.go:64] FLAG: --reserved-cpus="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586848 4985 flags.go:64] FLAG: --reserved-memory="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586856 4985 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586863 4985 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586869 4985 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586875 4985 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586882 4985 flags.go:64] FLAG: --runonce="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586889 4985 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586896 4985 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586903 4985 flags.go:64] FLAG: --seccomp-default="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586910 4985 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586916 4985 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586923 4985 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586930 4985 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586938 4985 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586944 4985 flags.go:64] FLAG: --storage-driver-secure="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586950 4985 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586957 4985 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586963 4985 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586970 4985 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586977 4985 flags.go:64] FLAG: --system-cgroups="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586983 4985 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586995 4985 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587001 
4985 flags.go:64] FLAG: --tls-cert-file="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587007 4985 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587016 4985 flags.go:64] FLAG: --tls-min-version="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587022 4985 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587028 4985 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587035 4985 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587042 4985 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587050 4985 flags.go:64] FLAG: --v="2" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587061 4985 flags.go:64] FLAG: --version="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587073 4985 flags.go:64] FLAG: --vmodule="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587083 4985 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587092 4985 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587271 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587279 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587286 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587293 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587300 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587308 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587314 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587320 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587326 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587331 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587336 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587342 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587348 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587353 4985 feature_gate.go:330] unrecognized feature gate: Example Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587361 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587366 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587371 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587376 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587382 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587387 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587392 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587397 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587403 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587408 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587413 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587418 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587423 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587428 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587435 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
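
The long runs of "unrecognized feature gate" warnings are expected on this OpenShift node: the cluster-wide gate list is handed to the kubelet, which only knows the upstream Kubernetes gates, so this build warns on and skips the OpenShift-specific ones and carries the rest into the resolved map logged at feature_gate.go:386 below. In config-file terms, gates reach the kubelet through the featureGates map; a minimal, assumed sketch using the gates that appear in that resolved map:

    # Hypothetical featureGates stanza. Only gates the kubelet itself
    # recognizes belong here (e.g. the ones in the resolved map below);
    # an unknown name in this file would normally be rejected outright
    # rather than merely warned about as in the wrapper's log lines.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      CloudDualStackNodeIPs: true
      DisableKubeletCloudCredentialProviders: true
      KMSv1: true
      ValidatingAdmissionPolicy: true
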
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587442 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587447 4985 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587454 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587460 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587466 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587471 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587477 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587483 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587490 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587496 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587501 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587507 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587512 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587517 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587522 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587528 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587533 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587538 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587544 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587549 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587554 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587560 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587565 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587570 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587576 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587581 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587586 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587591 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587596 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587601 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587606 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587611 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587617 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587624 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587631 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587636 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587642 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587647 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587652 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587658 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587664 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587669 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587680 4985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.684502 4985 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.684546 4985 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684628 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684637 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684643 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684649 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684655 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684660 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684666 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684672 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684677 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684683 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684688 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684698 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684706 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684714 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684722 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684730 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684738 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684744 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684750 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684756 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684761 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684767 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684772 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684778 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684783 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684788 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684793 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684798 4985 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684803 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684810 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684816 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684821 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684826 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684830 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684835 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684840 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684845 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684850 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684855 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684861 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684868 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684873 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684878 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684883 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684888 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684893 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684898 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684905 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684911 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684916 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684922 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684926 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684932 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684937 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684942 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684947 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684952 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684957 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684962 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684966 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684971 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684976 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684981 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684985 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684990 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684996 4985 feature_gate.go:330] unrecognized feature gate: Example
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685001 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685005 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685010 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685015 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685020 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.685029 4985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685211 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685218 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685224 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685230 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685235 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685240 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685244 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685264 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685270 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685276 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685283 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685289 4985 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685294 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685299 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685304 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685309 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685314 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685318 4985 feature_gate.go:330] unrecognized feature gate: Example
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685323 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685328 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685333 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685338 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685343 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685348 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685354 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685358 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685363 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685368 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685373 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685379 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685384 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685389 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685394 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685399 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685403 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685409 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685414 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685419 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685424 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685428 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685433 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685438 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685443 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685448 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685453 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685459 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685466 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685471 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685477 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685482 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685487 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685494 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685501 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685508 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685513 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685518 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685524 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685529 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685534 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685540 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685545 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685550 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685555 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685560 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685565 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685572 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685577 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685582 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685587 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685592 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685596 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.685604 4985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.685831 4985 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.693083 4985 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.693183 4985 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.695237 4985 server.go:997] "Starting client certificate rotation" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.695281 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.696541 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-03 02:42:34.243482987 +0000 UTC Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.696735 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.826922 4985 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.830375 4985 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 18:13:10 crc kubenswrapper[4985]: E0128 18:13:10.833556 4985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.856359 4985 log.go:25] "Validated CRI v1 runtime API" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.024483 4985 log.go:25] "Validated CRI v1 image API" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.026448 4985 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.040674 4985 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-18-07-50-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.040714 4985 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.062786 4985 manager.go:217] Machine: {Timestamp:2026-01-28 18:13:11.059922838 +0000 UTC m=+1.886485699 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a73758a0-c5e5-4e2e-bacd-4099da9969a4 BootID:ef51598b-c07a-479e-807b-3fca14f8607d Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 
Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d9:ec:ca Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d9:ec:ca Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1f:d8:b1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:16:1d:3d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ec:ce:8e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3f:88:71 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:3c:5c:b0:d7:ac Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:bd:68:fe:f8:02 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063083 4985 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063267 4985 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063600 4985 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063839 4985 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063874 4985 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.064110 4985 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.064124 4985 
container_manager_linux.go:303] "Creating device plugin manager" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.080709 4985 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.080748 4985 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.107865 4985 state_mem.go:36] "Initialized new in-memory state store" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.108155 4985 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.113945 4985 kubelet.go:418] "Attempting to sync node with API server" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.113981 4985 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.114075 4985 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.114094 4985 kubelet.go:324] "Adding apiserver pod source" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.114112 4985 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 18:13:11 crc kubenswrapper[4985]: W0128 18:13:11.121427 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.121560 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:11 crc kubenswrapper[4985]: W0128 18:13:11.121613 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.121746 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.123128 4985 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.125037 4985 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.126546 4985 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132669 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132694 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132702 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132710 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132722 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132732 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132741 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132753 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132763 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132773 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132784 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132792 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.145779 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.146402 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.146622 4985 server.go:1280] "Started kubelet"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.146844 4985 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.147781 4985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 18:13:11 crc systemd[1]: Started Kubernetes Kubelet.
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.148870 4985 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.180852 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.180926 4985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.181529 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:27:05.117889909 +0000 UTC Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.182180 4985 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.182226 4985 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.182426 4985 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.183381 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: W0128 18:13:11.183576 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.183713 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.183861 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="200ms" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.186530 4985 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.186551 4985 factory.go:55] Registering systemd factory Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.186560 4985 factory.go:221] Registration of the systemd container factory successfully Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.189514 4985 factory.go:153] Registering CRI-O factory Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.189562 4985 factory.go:221] Registration of the crio container factory successfully Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.189596 4985 factory.go:103] Registering Raw factory Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.189695 4985 manager.go:1196] Started watching for new ooms in manager Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.200995 4985 server.go:460] "Adding debug handlers to kubelet server" Jan 28 18:13:11 crc 
kubenswrapper[4985]: I0128 18:13:11.202157 4985 manager.go:319] Starting recovery of all containers Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.206904 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207034 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207111 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207201 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207300 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207377 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207451 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207523 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207600 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207685 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207765 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207838 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208148 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208233 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208340 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208420 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208502 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208581 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208658 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208749 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208828 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208903 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208979 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209057 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209137 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209217 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209310 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209390 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209465 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209540 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209625 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209701 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209773 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209846 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209920 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210005 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210082 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210152 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210220 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210309 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210384 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210469 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210565 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210649 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210729 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210808 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210894 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210974 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211070 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211161 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211242 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211356 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211447 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211535 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211717 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.201060 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ef7a4e24cefec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:13:11.146573804 +0000 UTC m=+1.973136625,LastTimestamp:2026-01-28 18:13:11.146573804 +0000 UTC m=+1.973136625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211866 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212062 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212086 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212099 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212113 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212126 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212137 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212179 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" 
seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212191 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212203 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212218 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212230 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212241 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212271 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212288 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212300 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212313 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212329 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212343 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 28 
18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212355 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212368 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212382 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212394 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212409 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212422 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212438 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212451 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212464 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212480 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212495 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212510 4985 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212522 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212535 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212550 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212562 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212574 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212588 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212602 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212613 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212626 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212640 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212652 4985 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212664 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212680 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212693 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212706 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212719 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212733 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212747 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212767 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212781 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212797 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212812 4985 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212826 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212839 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212855 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212870 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212884 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212898 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212915 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212930 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212941 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212954 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212967 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212982 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212997 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213011 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213023 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213037 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213050 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213066 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213081 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213094 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213106 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213129 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213143 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213157 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213174 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213187 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213200 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213212 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213226 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213238 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213267 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213284 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213298 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213311 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213323 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213338 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213350 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213367 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213380 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213393 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213406 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213420 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213433 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213447 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213462 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213475 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213488 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213500 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213513 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213526 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213564 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213584 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213597 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213610 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213624 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213637 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213649 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213662 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213674 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213686 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213702 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213715 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213727 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213742 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213757 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213771 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213787 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213798 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213809 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213822 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213835 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213846 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213858 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213870 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213884 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213896 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213910 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213922 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213935 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213946 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213957 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213966 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213977 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213987 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.214000 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.214011 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.214023 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216137 4985 reconstruct.go:144] "Volume is marked device as uncertain and added 
into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216164 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216180 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216192 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216952 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216975 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216991 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217004 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217018 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217031 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217044 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217059 4985 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217073 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217084 4985 reconstruct.go:97] "Volume reconstruction finished" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217093 4985 reconciler.go:26] "Reconciler: start to sync state" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.230578 4985 manager.go:324] Recovery completed Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.244766 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.252296 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.252565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.252580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.256279 4985 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.256388 4985 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.256417 4985 state_mem.go:36] "Initialized new in-memory state store" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.259054 4985 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.262594 4985 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.262655 4985 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.262695 4985 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.262871 4985 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 18:13:11 crc kubenswrapper[4985]: W0128 18:13:11.265592 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.265710 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.283994 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.363370 4985 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.384129 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.384560 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="400ms" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.460321 4985 policy_none.go:49] "None policy: Start" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.461594 4985 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.461642 4985 state_mem.go:35] "Initializing new in-memory state store" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.484364 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.564344 4985 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.585297 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.664949 4985 manager.go:334] "Starting Device Plugin manager" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.665474 4985 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.665508 4985 server.go:79] "Starting device plugin registration server" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666139 4985 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666160 4985 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666427 4985 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666526 4985 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666544 4985 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.691169 4985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.766749 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.767698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.767738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.767751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.767781 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.768321 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc"
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.786031 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="800ms"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.965292 4985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.965391 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.966989 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967027 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967186 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968364 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968451 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968463 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968484 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968415 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968418 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969296 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969490 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969459 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969586 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969593 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969703 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969724 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.969991 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970193 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970283 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970313 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970334 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970466 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970495 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970942 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971071 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
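
The "SyncLoop ADD" with source="file" above is the static pod path: the five control-plane manifests are read from disk and started locally even though node registration (kubelet_node_status.go:76/99) keeps failing with connection refused. The kubelet simply retries registration until the API server it is itself bootstrapping becomes reachable. A generic sketch of that "retry until reachable" pattern, with the endpoint taken from the log and everything else a placeholder:

    // register.go — generic "wait until the API server is reachable"
    // probe; not the kubelet's real registration call.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "api-int.crc.testing:6443" // endpoint from the log
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Println("unable to register node:", err)
                time.Sleep(time.Second) // the kubelet uses its own backoff
                continue
            }
            conn.Close()
            fmt.Println("API server reachable; registration can proceed")
            return
        }
    }
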
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971089 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971834 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.026981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027016 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027038 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027055 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027075 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027091 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027107 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027125 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027157 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027206 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027271 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027312 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027373 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128180 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128296 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128325 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128366 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128386 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128445 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128507 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128467 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128547 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128619 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128572 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128597 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128621 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128623 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128729 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128745 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128702 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128795 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128825 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128847 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128870 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128907 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128932 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128945 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.129045 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.148389 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.182501 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:20:56.485545204 +0000 UTC Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.208246 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.208472 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.319007 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.333195 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.339278 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.364964 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.365069 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.370535 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371359 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371760 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.372248 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.376816 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.444753 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-3dabca12b20e3e6225bcb6e54b01be3faef6f53bb25451609688004b8275f95c WatchSource:0}: Error finding container 3dabca12b20e3e6225bcb6e54b01be3faef6f53bb25451609688004b8275f95c: Status 404 returned error can't find the container with id 3dabca12b20e3e6225bcb6e54b01be3faef6f53bb25451609688004b8275f95c Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.454308 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-447cccc984e8c4acceb497204efff78e4320f6320be8387f3a6d0f95772e0635 WatchSource:0}: Error finding container 447cccc984e8c4acceb497204efff78e4320f6320be8387f3a6d0f95772e0635: Status 404 returned error can't find the container with id 447cccc984e8c4acceb497204efff78e4320f6320be8387f3a6d0f95772e0635 Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.460244 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0cd96db233c3ca92edeb9d45b0051d8aac558d5d1263af9a951a2ba6340c4d12 WatchSource:0}: Error finding container 0cd96db233c3ca92edeb9d45b0051d8aac558d5d1263af9a951a2ba6340c4d12: Status 404 returned error can't find the container with id 0cd96db233c3ca92edeb9d45b0051d8aac558d5d1263af9a951a2ba6340c4d12 Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.461346 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-044b433110577f9f6d86af0e4f25c3cc9f043fe4b9f89a9aa0e7eeb139034a6c WatchSource:0}: Error finding container 044b433110577f9f6d86af0e4f25c3cc9f043fe4b9f89a9aa0e7eeb139034a6c: Status 404 returned error can't find the container with id 044b433110577f9f6d86af0e4f25c3cc9f043fe4b9f89a9aa0e7eeb139034a6c Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.462138 4985 manager.go:1169] Failed to process watch event {EventType:0 
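
The certificate_manager.go:356 entries log a fresh rotation deadline on every evaluation (2025-11-10 above, then 2025-11-13, 2025-11-16, and so on below): the deadline is re-jittered inside the certificate's validity window each time. The sketch below illustrates that idea; the "70-90% of lifetime" window is an assumption about client-go's jitter, not something verified from this log.

    // deadline.go — jittered certificate rotation deadline sketch.
    // The 0.7–0.9 window is an assumption, not confirmed client-go behavior.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notBefore := time.Date(2025, 1, 28, 18, 13, 0, 0, time.UTC)  // assumed issue time
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)    // expiry from the log
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
        }
    }
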
Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.586849 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="1.6s"
Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.659148 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.659302 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.712156 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.712325 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.954243 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.955350 4985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.147705 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.173053 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.174950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.175014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
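
Every failure in this window carries the same root cause: dial tcp 38.102.83.195:6443: connect: connection refused, across leases, reflectors, CSINode publishing and CSR creation. Tallying refused requests per API path confirms a single unreachable API server rather than several independent faults; the snippet below does that for this artifact (the kubelet.log filename is an assumption).

    // refused.go — counts "connection refused" failures per request path.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    var urlRe = regexp.MustCompile(`https://api-int\.crc\.testing:6443(/[a-zA-Z0-9/._-]*)`)

    func main() {
        f, err := os.Open("kubelet.log") // assumed path
        if err != nil {
            panic(err)
        }
        defer f.Close()
        counts := map[string]int{}
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, "connect: connection refused") {
                continue
            }
            if m := urlRe.FindStringSubmatch(line); m != nil {
                counts[m[1]]++
            }
        }
        for path, n := range counts {
            fmt.Printf("%4d  %s\n", n, path)
        }
    }
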
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.175036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.175086 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:13:13 crc kubenswrapper[4985]: E0128 18:13:13.175801 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc"
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.182974 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:17:39.63451653 +0000 UTC
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.283648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"044b433110577f9f6d86af0e4f25c3cc9f043fe4b9f89a9aa0e7eeb139034a6c"}
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.285062 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"447cccc984e8c4acceb497204efff78e4320f6320be8387f3a6d0f95772e0635"}
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.286341 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3dabca12b20e3e6225bcb6e54b01be3faef6f53bb25451609688004b8275f95c"}
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.288377 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d639668d476d32a5f7c5b3fe7f6606100041f06e458095c3f365ae44dcbe708f"}
Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.290321 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0cd96db233c3ca92edeb9d45b0051d8aac558d5d1263af9a951a2ba6340c4d12"}
Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.148061 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.208981 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:08:48.467441226 +0000 UTC
Jan 28 18:13:14 crc kubenswrapper[4985]: E0128 18:13:14.209427 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="3.2s"
Jan 28 18:13:14 crc kubenswrapper[4985]: W0128 18:13:14.286526 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
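
The kubelet.go:2453 entries above are PLEG (pod lifecycle event generator) events; the five ContainerStarted Data values are the pod sandbox IDs created moments earlier, and the event payload happens to be printed in JSON shape. That makes lifecycle extraction easy; the snippet below is a log-analysis sketch for this artifact (kubelet.log path assumed) and relies on that JSON-shaped payload.

    // pleg.go — extracts PLEG pod lifecycle events from the log.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "regexp"
    )

    type plegEvent struct {
        ID   string `json:"ID"`
        Type string `json:"Type"`
        Data string `json:"Data"`
    }

    var evRe = regexp.MustCompile(`SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=(\{[^}]*\})`)

    func main() {
        f, err := os.Open("kubelet.log") // assumed path
        if err != nil {
            panic(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            m := evRe.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            var ev plegEvent
            if err := json.Unmarshal([]byte(m[2]), &ev); err != nil {
                continue // payload not JSON-shaped; skip
            }
            fmt.Printf("%-62s %-16s %.12s\n", m[1], ev.Type, ev.Data)
        }
    }
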
Jan 28 18:13:14 crc kubenswrapper[4985]: E0128 18:13:14.286596 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.776051 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.779171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.779308 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.779351 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.779407 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:13:14 crc kubenswrapper[4985]: E0128 18:13:14.780160 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc"
Jan 28 18:13:15 crc kubenswrapper[4985]: W0128 18:13:15.007449 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:15 crc kubenswrapper[4985]: E0128 18:13:15.007601 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.148325 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.209693 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:38:26.183363736 +0000 UTC
Jan 28 18:13:15 crc kubenswrapper[4985]: W0128 18:13:15.236135 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:15 crc kubenswrapper[4985]: E0128 18:13:15.236329 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.297913 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945" exitCode=0
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.298018 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945"}
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.298141 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.299619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.299711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.299748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.302712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db"}
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.305350 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415" exitCode=0
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.305561 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.305624 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415"}
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.307417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.307467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.307487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.308466 4985 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85" exitCode=0
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.308567 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85"}
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.308638 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.310068 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311004 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311060 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311244 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311536 4985 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5" exitCode=0
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311584 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5"}
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311699 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.312993 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.313029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.313048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:15 crc kubenswrapper[4985]: W0128 18:13:15.874069 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:15 crc kubenswrapper[4985]: E0128 18:13:15.874172 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.148127 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.210905 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:19:40.689302032 +0000 UTC
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.318002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871"}
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.318070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c"}
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.320075 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866"}
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.320101 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3"}
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.322331 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a"}
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.322409 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44"}
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.323988 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"83f697b1c16bcd1e36101e6b455b45641dbffe1cbf333e78f6a61de9228652f5"}
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.324026 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.324753 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.324787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.324799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.325769 4985 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07" exitCode=0
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.325826 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07"}
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.325859 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.326888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.326914 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.326923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.973724 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 18:13:16 crc kubenswrapper[4985]: E0128 18:13:16.975522 4985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.148786 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.211102 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:01:37.786958139 +0000 UTC
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.332172 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd"}
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.332365 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.333654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.333704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.333722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.336851 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b"}
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.336876 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.337998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.338039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.338056 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.339969 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6"}
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.342812 4985 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5" exitCode=0
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.342873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5"}
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.342927 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.342928 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344290 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344324 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:17 crc kubenswrapper[4985]: E0128 18:13:17.410688 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="6.4s"
Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.571794 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.730720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.731228 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.731363 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.980932 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.982124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.982155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.982167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.982193 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:13:17 crc kubenswrapper[4985]: E0128 18:13:17.982850 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.147846 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.212033 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:13:43.82982188 +0000 UTC Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.352237 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.352312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.352329 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.356937 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.356995 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.357005 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.357097 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.357104 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.357179 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358214 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358238 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358265 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358552 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:18 crc kubenswrapper[4985]: E0128 18:13:18.641293 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ef7a4e24cefec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:13:11.146573804 +0000 UTC m=+1.973136625,LastTimestamp:2026-01-28 18:13:11.146573804 +0000 UTC m=+1.973136625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.704237 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:18 crc kubenswrapper[4985]: W0128 18:13:18.932044 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:18 crc kubenswrapper[4985]: E0128 18:13:18.932156 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.148186 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.212426 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:07:23.448605328 +0000 UTC Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.361833 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.364166 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc" exitCode=255 Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.364267 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc"} Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.364398 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.365910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.365950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.365965 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.366661 4985 scope.go:117] "RemoveContainer" containerID="50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 
18:13:19.369342 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4b15aae726dd7880c717d6d1dc56ace05f73be487cba796379028df3328c34e"} Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.369378 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.369404 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8"} Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.369469 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.369496 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370451 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370514 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.371574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.371599 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.371606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.130317 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.130700 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.130757 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.148137 
4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.213045 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:15:44.042354573 +0000 UTC Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.215327 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.373926 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.375521 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4"} Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.375642 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.375686 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.375643 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.214238 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:10:59.533753477 +0000 UTC Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.377463 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.377503 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.377466 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:21 crc kubenswrapper[4985]: 
I0128 18:13:21.378549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.378624 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.378639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.379210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.379270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.379284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:21 crc kubenswrapper[4985]: E0128 18:13:21.691974 4985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.888722 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.161600 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.162295 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.164322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.164487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.164513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.215084 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:31:44.389974342 +0000 UTC Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.380781 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.380805 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.382984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.383035 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.383054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.383588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 
18:13:22.383811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.384008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:23 crc kubenswrapper[4985]: I0128 18:13:23.216144 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:27:01.375896519 +0000 UTC Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.217069 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:38:16.591127887 +0000 UTC Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.383353 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.385387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.385443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.385460 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.385517 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:13:25 crc kubenswrapper[4985]: I0128 18:13:25.217318 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:07:53.978459953 +0000 UTC Jan 28 18:13:25 crc kubenswrapper[4985]: I0128 18:13:25.521082 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.022632 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.023021 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.024829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.024898 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.024927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.218096 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:14:32.985380704 +0000 UTC Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.218782 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:03:08.57583181 +0000 UTC Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.738660 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.738812 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.740073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.740318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.740477 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.749064 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.218946 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:44:41.101007374 +0000 UTC Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.399270 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.400308 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.400372 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.400387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.526225 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.526300 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 18:13:29 crc kubenswrapper[4985]: I0128 18:13:29.023158 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:13:29 crc kubenswrapper[4985]: I0128 18:13:29.023410 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:13:29 crc 
kubenswrapper[4985]: I0128 18:13:29.220067 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:49:46.769441408 +0000 UTC Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.138333 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.138965 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.140991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.141058 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.141078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.146751 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.220626 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:37:16.427270694 +0000 UTC Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.332014 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.332333 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.334036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.334081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.334093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.347672 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.405317 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.405349 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.406861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.406957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.406973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.406985 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.407031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.407051 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:31 crc kubenswrapper[4985]: I0128 18:13:31.221812 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:10:09.063673005 +0000 UTC Jan 28 18:13:31 crc kubenswrapper[4985]: E0128 18:13:31.692470 4985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 18:13:32 crc kubenswrapper[4985]: I0128 18:13:32.222754 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:17:59.647061462 +0000 UTC Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.544476 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:40:48.808006848 +0000 UTC Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.824799 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.855391 4985 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.855422 4985 trace.go:236] Trace[54370517]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 18:13:21.356) (total time: 12498ms): Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[54370517]: ---"Objects listed" error: 12498ms (18:13:33.855) Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[54370517]: [12.498801087s] [12.498801087s] END Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.855444 4985 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.855604 4985 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.866183 4985 trace.go:236] Trace[1451274034]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 18:13:20.828) (total time: 13037ms): Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[1451274034]: ---"Objects listed" error: 13037ms (18:13:33.866) Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[1451274034]: [13.037535893s] [13.037535893s] END Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.866216 4985 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.866686 4985 trace.go:236] Trace[291536343]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 18:13:21.153) (total time: 12713ms): Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[291536343]: ---"Objects listed" error: 12713ms (18:13:33.866) Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[291536343]: [12.713250654s] [12.713250654s] END Jan 28 18:13:33 crc 
kubenswrapper[4985]: I0128 18:13:33.866716 4985 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.870838 4985 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.875294 4985 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.875425 4985 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876749 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876773 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.916342 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154a
fa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.923052 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.923103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:33 crc 
kubenswrapper[4985]: I0128 18:13:33.923120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.923145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.923160 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.936854 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status payload identical to the previous attempt; duplicate elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941036 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51904->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941107 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51904->192.168.126.11:17697: read: connection reset by peer"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941566 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941616 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941856 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941882 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943754 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943770 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943804 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
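The prober entries above show the kubelet's readiness probe against the kube-apiserver-check-endpoints container failing, first with a connection reset and then with connection refused. Below is a minimal diagnostic sketch (not kubelet code) that issues the same kind of HTTPS GET; the endpoint URL is taken from the log, TLS verification is skipped only because the serving certificate is not locally trusted, and everything else is illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Endpoint copied from the probe output above.
	url := "https://192.168.126.11:17697/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification: acceptable for a one-off diagnostic only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		// "connection refused" / "connection reset" here corresponds to the
		// probeResult="failure" lines in the log.
		fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}

A 200 response is what a passing probe would see; any transport error reproduces the failures recorded above.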
Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.954433 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status payload identical to the previous attempt; duplicate elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958006 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958040 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958052 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
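Each of these retries is a strategic merge patch against the node's status subresource (the $setElementOrder/conditions directive in the payload marks it as such), and each is rejected before it reaches storage because an admission webhook cannot be called. A minimal client-go sketch of the same call shape, assuming a cluster reachable through the default kubeconfig; the node name "crc" comes from the log, while the condition values are illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Strategic merge patch: node conditions are merged by their "type" key,
	// mirroring the shape of the payload in the entries above.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)

	// "status" as the subresource argument targets nodes/status, the same
	// endpoint the kubelet patches.
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "crc",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		// A webhook the API server cannot reach surfaces exactly as logged:
		// "Internal error occurred: failed calling webhook ...".
		fmt.Println("patch rejected:", err)
	}
}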
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958085 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.967059 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status payload identical to the previous attempt; duplicate elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
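Every rejection above has the same root cause: nothing is listening on 127.0.0.1:9743, the backend of the node.network-node-identity.openshift.io webhook that the API server must call before admitting the patch. A tiny sketch that checks the listener directly; the address is copied from the Post URL in the log, the rest is illustrative.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Webhook backend address taken from the failing Post URL above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		// "connect: connection refused" means no listener: the
		// network-node-identity webhook server is not up yet.
		fmt.Println("webhook endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting TCP connections")
}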
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970736 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.980737 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status payload identical to the previous attempt; duplicate elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.980913 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
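The burst ends with "Unable to update node status" err="update node status exceeds retry count": the kubelet makes a bounded number of patch attempts per sync and then gives up until the next cycle, which matches the five "will retry" entries recorded within this second. A simplified sketch of that bounded-retry pattern; tryUpdateNodeStatus is a stand-in, and the constant is only assumed to mirror the kubelet's nodeStatusUpdateRetry.

package main

import (
	"errors"
	"fmt"
)

// Assumed retry budget; five failed attempts are visible in the log above.
const nodeStatusUpdateRetry = 5

// Stand-in for the real status patch, which in this log always fails because
// the admission webhook backend refuses connections.
func tryUpdateNodeStatus() error {
	return errors.New("connect: connection refused")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}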
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982646 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982666 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982680 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085833 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.138016 4985 apiserver.go:52] "Watching apiserver"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.172808 4985 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.173330 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.173925 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174051 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.174150 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174316 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174427 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174545 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.174667 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174745 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.174913 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.176750 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.177753 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.178124 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.178439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.179319 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.179465 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.179735 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.179787 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.183507 4985 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.183863 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188599 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188696 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188714 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.227039 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.245993 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258192 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258242 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258296 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258352 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258378 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258403 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258432 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258481 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258506 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258530 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258552 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258567 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258577 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258632 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258658 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258649 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258686 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258773 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258800 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258823 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258842 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258842 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258881 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258900 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258916 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258935 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258951 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258973 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258990 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259009 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259025 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259041 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259057 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259074 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259123 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259141 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259167 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259237 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259279 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259300 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259318 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259366 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259387 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259403 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259423 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259443 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" 
(UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259464 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259482 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259500 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259520 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259563 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259587 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259606 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259632 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259648 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259665 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259703 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259653 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259724 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259782 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259812 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259834 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259855 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259916 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259933 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259950 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259966 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259989 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260007 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260031 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260049 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260011 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260067 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260102 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260115 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260125 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260148 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260177 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260202 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260233 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260286 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260294 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260314 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260342 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260370 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260385 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260398 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260415 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260422 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260450 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260477 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260480 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260507 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260534 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260559 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260586 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260567 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260616 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260646 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260678 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260707 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260733 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260761 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260788 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260815 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260841 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260868 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260892 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260917 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260944 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260968 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260994 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261014 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261038 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261063 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261090 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261117 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261140 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261163 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261187 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261212 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261236 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261263 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261302 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261354 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261376 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261399 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261423 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261449 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261478 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261505 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261535 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260762 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263058 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260850 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260886 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261020 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261039 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261162 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261340 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261336 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261558 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.261584 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 18:13:34.761546196 +0000 UTC m=+25.588109047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261597 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261257 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.262462 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263125 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.262749 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.262816 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263041 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263573 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263640 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263690 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263757 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263848 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264068 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264023 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264138 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264256 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264292 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264300 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264506 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264524 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264536 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264556 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264669 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265238 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265257 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265258 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265979 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265999 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266264 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266431 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266445 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266712 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266746 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266812 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266859 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267073 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267135 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267411 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267414 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267478 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267731 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267953 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268029 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268207 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268519 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268794 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268959 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268964 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269016 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269050 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269079 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269116 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269142 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269519 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269541 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269567 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269595 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269613 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269632 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269650 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269672 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269692 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269710 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269749 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269771 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269802 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269822 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269841 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269882 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269904 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269924 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269904 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269943 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270095 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270147 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270186 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270215 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270243 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270297 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270324 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270353 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270379 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270406 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270446 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270473 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270500 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270526 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270552 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270580 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270608 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270636 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270673 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270699 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270725 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270751 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270780 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270834 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:13:34 crc 
kubenswrapper[4985]: I0128 18:13:34.270888 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270913 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270938 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270978 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271003 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271030 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271059 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271090 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271119 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271152 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc 
kubenswrapper[4985]: I0128 18:13:34.271185 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271227 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271258 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271372 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271398 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271451 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271474 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271536 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271660 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271698 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271772 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271803 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271835 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271868 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271896 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271929 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271958 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271983 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272086 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272104 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272118 4985 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272134 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272150 4985 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272164 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272179 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272194 4985 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272208 4985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272221 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272236 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272251 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272302 4985 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272318 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272334 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272347 4985 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272362 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272375 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272391 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" 
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272408 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272424 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272441 4985 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272454 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272470 4985 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272501 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269205 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269247 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269357 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269491 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269951 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269970 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270331 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270470 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270480 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270751 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270890 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271108 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273233 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271519 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272033 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272100 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272712 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273316 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273498 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273736 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.274031 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.274234 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273085 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.275524 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.276006 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.276560 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.276569 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.276789 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277129 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277254 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277491 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277754 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277769 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277788 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277986 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.278668 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.278988 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279051 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279286 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279455 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279636 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279717 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.280536 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.280863 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.280971 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.281171 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.281545 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.281783 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.281976 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282074 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282528 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282917 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282948 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282610 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.283205 4985 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.283566 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.283586 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.283690 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.284044 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.284113 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:34.78409521 +0000 UTC m=+25.610658031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284345 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284714 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.284748 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284705 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284806 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.284841 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:34.78481582 +0000 UTC m=+25.611378861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285004 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285206 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285315 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285412 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285501 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285329 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285865 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.286017 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.286162 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.286444 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.286570 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.290486 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.293649 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.293758 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.293837 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.293907 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.294047 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.294140 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.294350 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.294575 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.295317 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.301238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301551 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301575 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.301567 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301592 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301847 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:34.801828521 +0000 UTC m=+25.628391342 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301625 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301884 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301895 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301921 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:34.801915203 +0000 UTC m=+25.628478014 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302110 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302134 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302742 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.303623 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.303766 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
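Note on the "No retries permitted until ... (durationBeforeRetry 500ms)" entries above: the kubelet's nested pending operations pace volume retries with a per-volume exponential backoff. A minimal sketch of that pacing, assuming the commonly cited constants (500ms initial delay, doubling per failure, capped at roughly two minutes; the exact cap is an assumption, not read from this log):

    // Illustrative only: retry pacing behind "durationBeforeRetry 500ms".
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	backoff := 500 * time.Millisecond    // initial delay seen in the log
    	maxBackoff := 2*time.Minute + 2*time.Second // assumed cap
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, backoff)
    		backoff *= 2 // double on each consecutive failure
    		if backoff > maxBackoff {
    			backoff = maxBackoff
    		}
    	}
    }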
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.304870 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.305908 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.305948 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.306289 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.306436 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.311905 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.313057 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315108 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315122 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315200 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315396 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315600 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315818 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315940 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.316548 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315137 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.317886 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.318658 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.318967 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.318951 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.319146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.319877 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.319974 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.319991 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.320607 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.320643 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.320773 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.321341 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.321977 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.323184 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.323387 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.323465 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.323711 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.325454 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.325590 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.325641 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.325764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.326683 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.329625 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.331964 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.332576 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.346407 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.346760 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.373949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374028 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374111 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374149 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374157 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374210 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374229 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374236 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374243 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374299 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374309 4985 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374519 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374591 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374973 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375536 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375557 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" 
DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375567 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375970 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375981 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.376211 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.376222 4985 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.377217 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.377567 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.377727 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.377991 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378303 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378367 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378449 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378525 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378609 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378622 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378633 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378643 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378653 4985 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.379034 4985 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.379115 4985 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.379543 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381622 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381888 4985 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381953 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381968 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381996 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.382022 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.382035 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.382046 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383458 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383495 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383521 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383542 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383565 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383582 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383596 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383616 4985 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383630 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383644 4985 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383658 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383678 4985 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383691 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383705 4985 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383720 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383738 4985 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383753 4985 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383767 4985 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383787 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383813 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383826 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383838 4985 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383885 4985 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383899 4985 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383911 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383923 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383940 4985 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383977 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383990 4985 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384001 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384021 4985 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384033 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384045 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384061 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384100 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384117 4985 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384132 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384152 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384191 4985 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384206 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384220 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384238 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384286 4985 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384302 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384319 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384331 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384368 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384382 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384398 4985 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384410 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384465 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384476 4985 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384495 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384507 4985 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384542 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384555 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384572 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384584 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384621 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384639 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384650 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384661 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384672 4985 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384707 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384720 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384731 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384745 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384780 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384795 4985 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384810 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384824 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384864 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384879 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384892 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384908 4985 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384925 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384959 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384971 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384989 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385001 4985 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385035 4985 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385047 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385064 4985 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385075 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385088 4985 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385126 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385138 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385150 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385162 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385195 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385209 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385220 4985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385232 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385282 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385296 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385313 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385325 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385365 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385381 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385396 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385413 4985 reconciler_common.go:293] "Volume 
detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385444 4985 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385460 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385475 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385493 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385509 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385548 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385561 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385578 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385589 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385625 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385641 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385654 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 
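[Editor's note] The reconciler_common.go:293 entries above record one "Volume detached" line per volume, and each UniqueName embeds both the volume plugin (kubernetes.io/secret, configmap, projected, empty-dir, csi) and the owning pod's UID. A minimal standalone sketch for tallying these entries per plugin from a saved copy of this log; the file name and regular expression are assumptions for illustration, not kubelet code:

// tallydetached.go: count "Volume detached" reconciler entries per volume plugin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches e.g. ... volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/...
	// (quotes appear backslash-escaped in the log text itself).
	re := regexp.MustCompile(`Volume detached for volume \\"([^"]+)\\" \(UniqueName: \\"(kubernetes\.io/[a-z-]+)/`)
	f, err := os.Open("kubelet.log") // assumed local copy of this log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries can be long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[2]]++ // key by plugin, e.g. kubernetes.io/projected
		}
	}
	for plugin, n := range counts {
		fmt.Printf("%-30s %d\n", plugin, n)
	}
}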
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385666 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385702 4985 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385719 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385729 4985 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385741 4985 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385752 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385791 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385803 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385814 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385825 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385861 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385875 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385886 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.393792 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.395585 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404686 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404728 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.418022 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.487326 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.487376 4985 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.487396 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
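[Editor's note] The setters.go:603 entry above logs the exact NodeCondition the kubelet writes while the CNI configuration is missing. A sketch that rebuilds that condition with the k8s.io/api types and prints the same JSON payload; the timestamps are copied from the log, everything else is illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	now := metav1.NewTime(time.Date(2026, 1, 28, 18, 13, 34, 0, time.UTC))
	cond := v1.NodeCondition{
		Type:               v1.NodeReady,
		Status:             v1.ConditionFalse,
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	b, _ := json.Marshal(cond)
	fmt.Println(string(b)) // matches the condition={...} payload in the log
}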
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.502834 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507655 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:34 crc kubenswrapper[4985]: W0128 18:13:34.516668 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-954dfbe20458f2a276ef5b967ea13a5b9e9aba3d9c2e1d94ec51df169d549692 WatchSource:0}: Error finding container 954dfbe20458f2a276ef5b967ea13a5b9e9aba3d9c2e1d94ec51df169d549692: Status 404 returned error can't find the container with id 954dfbe20458f2a276ef5b967ea13a5b9e9aba3d9c2e1d94ec51df169d549692
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.518230 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.531580 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 18:13:34 crc kubenswrapper[4985]: W0128 18:13:34.535845 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3a93b0123b7822ad929b9d678b628398e9494e3b0d5796f8c0d14e9c2e51d3aa WatchSource:0}: Error finding container 3a93b0123b7822ad929b9d678b628398e9494e3b0d5796f8c0d14e9c2e51d3aa: Status 404 returned error can't find the container with id 3a93b0123b7822ad929b9d678b628398e9494e3b0d5796f8c0d14e9c2e51d3aa
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.545422 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:04:21.354606893 +0000 UTC
Jan 28 18:13:34 crc kubenswrapper[4985]: W0128 18:13:34.549564 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-24d3f0d27159e20caf8fe78a5888ed66791a2f6c90e8acd59af0d337112c26eb WatchSource:0}: Error finding container 24d3f0d27159e20caf8fe78a5888ed66791a2f6c90e8acd59af0d337112c26eb: Status 404 returned error can't find the container with id 24d3f0d27159e20caf8fe78a5888ed66791a2f6c90e8acd59af0d337112c26eb
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611331 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611399 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714561 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714572 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.790202 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.790344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.790386 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
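[Editor's note] The certificate_manager.go:356 entry above reports a rotation deadline well before the certificate's expiry, and a later entry in this log reports a different deadline for the same certificate. client-go's certificate manager picks a jittered deadline at roughly 70-90% of the certificate's lifetime, recomputed on each pass; the sketch below reproduces that scheme under that assumption, with an invented notBefore since the log only shows the expiration:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a uniformly random point in [0.7, 0.9] of the
// certificate lifetime (assumed to mirror client-go's jitter scheme).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notBefore := time.Date(2025, 1, 28, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log line
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}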
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790428 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.790381512 +0000 UTC m=+26.616944373 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790523 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790573 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790603 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.790584667 +0000 UTC m=+26.617147618 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790715 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.7906882 +0000 UTC m=+26.617251121 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816881 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.891431 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.891511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.891694 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.891724 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.891744 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.891818 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.89179565 +0000 UTC m=+26.718358511 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.892232 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.892310 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.892330 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.892415 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.892393667 +0000 UTC m=+26.718956498 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.919938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.919999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.920017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.920043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
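[Editor's note] The nestedpendingoperations.go:348 entries above defer each failed mount with durationBeforeRetry 1s; the same volumes reappear later in this log with durationBeforeRetry 2s. That is the kubelet's per-operation exponential backoff. A minimal doubling-backoff sketch; the starting delay matches the log, but the cap here is an assumed placeholder rather than the kubelet's exact constant:

package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay time.Duration // next delay to hand out
	max   time.Duration // ceiling on the delay
}

// next returns the current delay and doubles it for the following failure.
func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.max {
		b.delay = b.max
	}
	return d
}

func main() {
	b := backoff{delay: time.Second, max: 2 * time.Minute}
	for i := 0; i < 5; i++ {
		fmt.Println("durationBeforeRetry", b.next()) // 1s, 2s, 4s, 8s, 16s
	}
}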
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.920061 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022789 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022822 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126424 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230946 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.270069 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.270589 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.271355 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.271942 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334140 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
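[Editor's note] The kubelet_volumes.go:163 entries above show the housekeeping pass deleting /var/lib/kubelet/pods/<podUID>/volumes for pods that no longer exist. A read-only sketch that reports which pod dirs still have volume plugin dirs waiting to be unmounted; the path layout follows the log, the rest is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	podsDir := "/var/lib/kubelet/pods"
	entries, err := os.ReadDir(podsDir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		volDir := filepath.Join(podsDir, e.Name(), "volumes")
		vols, err := os.ReadDir(volDir)
		if err != nil {
			continue // no volumes dir left; already cleaned up
		}
		fmt.Printf("pod %s: %d volume plugin dir(s) remaining\n", e.Name(), len(vols))
	}
}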
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334178 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.425908 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.426551 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.435447 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" exitCode=255
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437168 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437316 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437551 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.446437 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.447345 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.448568 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.450471 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.451643 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.453156 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.484804 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.485806 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.540979 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.541026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.541042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.541066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.541081 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.546175 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:26:51.213443775 +0000 UTC
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.546859 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.547887 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.617683 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.618466 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.644558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.644976 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.644997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.645021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
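[Editor's note] The log.go:25 entries above show the kubelet parsing rotated container logs ("0.log", "1.log") under /var/log/pods/<namespace>_<pod>_<uid>/<container>/. A small walker that counts log files per container directory, purely as an illustration of that layout:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	root := "/var/log/pods"
	perContainer := map[string]int{}
	// Walk the pod log tree and bucket every *.log file by its container dir.
	_ = filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return nil
		}
		if filepath.Ext(path) == ".log" {
			perContainer[filepath.Dir(path)]++
		}
		return nil
	})
	for dir, n := range perContainer {
		fmt.Printf("%s: %d log file(s)\n", dir, n)
	}
}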
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.645035 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747772 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747806 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.799898 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.800015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.800050 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800153 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800184 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.800144377 +0000 UTC m=+28.626707218 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800237 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.800224559 +0000 UTC m=+28.626787520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800395 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800447 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.800438305 +0000 UTC m=+28.627001136 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.850954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.851037 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.851061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.851098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
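[Editor's note] Every "object <namespace>/<name> not registered" error above means the kubelet's secret and configmap managers have not yet registered a watch for that object (the pods were only just re-added at startup), so the mount retries keep failing until the pod sources sync. A sketch that tallies those errors from a saved log fed on stdin; the regular expression is an assumption:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches e.g.: object "openshift-network-console"/"networking-console-plugin" not registered
	re := regexp.MustCompile(`object "([^"]+)"/"([^"]+)" not registered`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	counts := map[string]int{}
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+"/"+m[2]]++ // key by namespace/name
		}
	}
	for obj, n := range counts {
		fmt.Printf("%-60s %d\n", obj, n)
	}
}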
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.878367 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.878823 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.879620 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.880214 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.880707 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.881256 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.881777 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.882473 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.882917 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.883607 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.884276 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.885450 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.886132 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.886687 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
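
The kubelet_volumes.go:163 run above is startup housekeeping: pods that were deleted from the API while the kubelet was down still have /var/lib/kubelet/pods/<uid>/volumes directories on disk, and each directory is removed once its volumes are confirmed unmounted. A short sketch to list which pod UIDs were swept; the kubelet.log filename is a placeholder for wherever this log lives:

# Sketch: list (podUID, path) pairs from the orphan-cleanup entries.
import re

CLEANED = re.compile(
    r'"Cleaned up orphaned pod volumes dir" '
    r'podUID="([0-9a-f-]+)" path="([^"]+)"'
)

with open("kubelet.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        m = CLEANED.search(line)
        if m:
            print(m.group(1), "->", m.group(2))
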
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.887240 4985 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.887418 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.888807 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.900317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.900380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900463 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900486 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900499 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900561 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.900544267 +0000 UTC m=+28.727107088 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
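
The kube-api-access-* volumes failing above are projected service-account token volumes; besides the token they bundle the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, so they hit the same "not registered" condition as the ConfigMap and Secret mounts earlier. Each failure records exactly when the operation may be retried. A sketch that reads the deadline and backoff back out of a nestedpendingoperations.go:348 entry, with the format taken from the lines above:

# Sketch: pull the retry deadline and backoff out of a
# "No retries permitted until" entry.
import re

RETRY = re.compile(
    r"No retries permitted until (\S+ \S+) \+0000 UTC "
    r"m=\+(\S+) \(durationBeforeRetry (\S+)\)"
)

line = ('... failed. No retries permitted until 2026-01-28 '
        '18:13:37.900544267 +0000 UTC m=+28.727107088 '
        '(durationBeforeRetry 2s). Error: MountVolume.SetUp failed ...')
deadline, monotonic, backoff = RETRY.search(line).groups()
print(deadline)          # 2026-01-28 18:13:37.900544267
print(float(monotonic))  # seconds since kubelet start: 28.727107088
print(backoff)           # 2s
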
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900661 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900719 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900743 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900833 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.900804974 +0000 UTC m=+28.727367835 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953752 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953809 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
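
Note the cadence: the same five-entry burst (SufficientMemory, NoDiskPressure, SufficientPID, NotReady, then "Node became not ready") recurs roughly every 100ms, because the Ready condition is re-derived on every node-status update attempt and keeps failing for the same missing-CNI-config reason. Bucketing the setters.go entries by second makes that visible:

# Sketch: count "Node became not ready" assertions per wall-clock second.
import re
from collections import Counter

NOT_READY = re.compile(r'(\d{2}:\d{2}:\d{2})\.\d+ .*"Node became not ready"')

def heartbeat_histogram(lines):
    counts = Counter()
    for line in lines:
        m = NOT_READY.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    '... I0128 18:13:35.747806 4985 setters.go:603] "Node became not ready" ...',
    '... I0128 18:13:35.851123 4985 setters.go:603] "Node became not ready" ...',
    '... I0128 18:13:36.056134 4985 setters.go:603] "Node became not ready" ...',
]
print(heartbeat_histogram(sample))  # Counter({'18:13:35': 2, '18:13:36': 1})
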
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.054375 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.055882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.055910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.055920 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.056124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.056134 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.056220 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.060583 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.092139 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.093927 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.095587 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.097147 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.099963 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.102581 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.103693 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
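
From here the log shifts toward recovery: SyncLoop (PLEG) ContainerStarted events show containers coming back, but every status patch in the entries below is rejected with dial tcp 127.0.0.1:9743: connect: connection refused, because pod status updates must pass the pod.network-node-identity.openshift.io admission webhook and its backing pod (network-node-identity-vrzqb, itself just starting here) is not listening yet. Tallying those failures per pod shows the blast radius of that ordering loop:

# Sketch: tally "Failed to update status for pod" entries per pod.
import re
from collections import Counter

PATCH_FAIL = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')

def failed_status_patches(lines):
    tally = Counter()
    for line in lines:
        for pod in PATCH_FAIL.findall(line):
            tally[pod] += 1
    return tally

# 'kubelet.log' is a placeholder filename for this log.
with open("kubelet.log", encoding="utf-8", errors="replace") as log:
    for pod, n in failed_status_patches(log).most_common():
        print(f"{n:4d}  {pod}")
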
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.104392 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.104921 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.105542 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.106069 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.107717 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.108714 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.109300 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.110290 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.110878 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.111984 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.112586 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113151 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"24d3f0d27159e20caf8fe78a5888ed66791a2f6c90e8acd59af0d337112c26eb"}
Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113190 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb"
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113203 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3a93b0123b7822ad929b9d678b628398e9494e3b0d5796f8c0d14e9c2e51d3aa"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113309 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113327 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"954dfbe20458f2a276ef5b967ea13a5b9e9aba3d9c2e1d94ec51df169d549692"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113339 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113369 4985 scope.go:117] "RemoveContainer" containerID="50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.118560 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.129829 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.141064 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.153096 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159000 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159058 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159099 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.166395 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.180215 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.190483 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.199354 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.211729 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.223084 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.232205 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.246178 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.256531 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262136 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.263223 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.263241 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.263223 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.263374 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.263488 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.263660 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.309835 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.310796 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.311024 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.311290 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365514 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365557 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.441356 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.448579 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.455510 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.458851 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.459229 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.459820 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.465668 4985 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468647 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468714 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.480549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.492412 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.501168 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.512108 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.520239 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.530353 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:19Z\\\",\\\"message\\\":\\\"W0128 18:13:18.585836 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
18:13:18.586705 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769623998 cert, and key in /tmp/serving-cert-3647538429/serving-signer.crt, /tmp/serving-cert-3647538429/serving-signer.key\\\\nI0128 18:13:18.896551 1 observer_polling.go:159] Starting file observer\\\\nW0128 18:13:18.981716 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:18.981881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:18.988226 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3647538429/tls.crt::/tmp/serving-cert-3647538429/tls.key\\\\\\\"\\\\nF0128 18:13:19.174577 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.539836 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.546992 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:37:14.156060105 +0000 UTC Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.550010 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.559226 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.569598 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 
18:13:36.571769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.571811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.571819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.571836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.571848 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.583153 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.594594 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.603357 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.614018 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.624409 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675313 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675350 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778855 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778889 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882131 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882148 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882158 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985167 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088148 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190849 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190919 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293482 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293540 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293550 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396667 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396685 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396729 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499802 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499822 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.547443 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:33:29.784992068 +0000 UTC Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602601 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705261 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705323 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.808520 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.808963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.809042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.809158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.809241 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.820277 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.820446 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.820581 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.820747 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.820901 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.820878178 +0000 UTC m=+32.647441009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.821498 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.821484705 +0000 UTC m=+32.648047526 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.821686 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.821819 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.821792764 +0000 UTC m=+32.648355585 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.911716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.912082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.912176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.912321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.912464 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.921779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.921937 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.922203 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.922344 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.922438 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.922572 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.922553414 +0000 UTC m=+32.749116245 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.923314 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.923466 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.923557 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.923687 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.923664925 +0000 UTC m=+32.750227766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015810 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015859 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118080 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118165 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.220935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.220973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.220982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.220995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.221005 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.263628 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.263728 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:38 crc kubenswrapper[4985]: E0128 18:13:38.263772 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.263749 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:38 crc kubenswrapper[4985]: E0128 18:13:38.264066 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:38 crc kubenswrapper[4985]: E0128 18:13:38.264183 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323490 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426454 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.466868 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.480672 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.494876 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.506691 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.520820 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.529069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 
crc kubenswrapper[4985]: I0128 18:13:38.529129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.529150 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.529179 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.529204 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.532175 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.544807 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.548270 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:32:13.696351274 +0000 UTC Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.562117 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.578529 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632538 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632601 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632648 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
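
Both status patches above are rejected for the same reason: the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node clock reading of 2026-01-28, so every TLS dial to https://127.0.0.1:9743 fails x509 validation. The failing check is plain certificate-validity-window verification; below is a minimal standalone sketch in Go that reproduces it (the PEM path is hypothetical, not taken from the cluster):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path for illustration; in the cluster the webhook
	// presents this certificate on 127.0.0.1:9743.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	// Same condition behind "certificate has expired or is not yet valid".
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid until %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Until that certificate is regenerated, every status patch routed through the pod.network-node-identity.openshift.io webhook will keep failing with the identical error, which is why the same message repeats below for each pod.
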
Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735405 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735415 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.779894 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.780792 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:13:38 crc kubenswrapper[4985]: E0128 18:13:38.781004 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838777 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838875 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941311 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941436 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044131 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044207 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044229 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044242 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147208 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
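
The kube-apiserver-check-endpoints container above is in CrashLoopBackOff: after a failed run the kubelet refuses to restart it for 10 seconds, and the delay grows with each subsequent crash. A sketch of that schedule follows; the doubling start of 10s matches the logged message, while the exact 5-minute cap is the upstream kubelet default and should be treated as an assumption here:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Back-off doubles per failed restart, capped (assumed 5m, the
	// upstream kubelet MaxContainerBackOff default).
	const maxDelay = 5 * time.Minute
	delay := 10 * time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart attempt %d: wait %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
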
Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.249986 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.250053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.250074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.250134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.250154 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.352933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.352968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.352982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.352998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.353010 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456301 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456356 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456399 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.549481 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:20:44.131033658 +0000 UTC Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559153 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662104 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
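
The certificate_manager line above reports a rotation deadline (2026-01-13) that is already in the past relative to the log's clock, so the kubelet begins rotating its serving certificate immediately — producing the CSR seen a few entries below. A sketch of how such a deadline lands well before expiry, assuming client-go's behavior of picking a randomized fraction (roughly 70-90%, an assumption) of the certificate lifetime; the issue time used is also an assumption, only the expiry comes from the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a randomized point inside the validity window,
// mirroring (approximately) client-go's certificate manager.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notBefore := time.Date(2025, time.November, 26, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("deadline passed: rotate now (file a new CSR)")
	}
}
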
Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765530 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765584 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.868970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.869036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.869047 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.869068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.869080 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.967854 4985 csr.go:261] certificate signing request csr-mk7bs is approved, waiting to be issued Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.971959 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.972158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.972223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.972322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.972422 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.014229 4985 csr.go:257] certificate signing request csr-mk7bs is issued Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075616 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075660 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
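
csr-mk7bs moves through two distinct states above — approved at 18:13:39.967, issued at 18:13:40.014 — because approval and signing are separate steps: an approved CSR has the Approved condition set but an empty status.certificate until the signer fills it in. A minimal client-go sketch distinguishing the same states (assumes in-cluster credentials and a go.mod with k8s.io/client-go; the CSR name is taken from the log):

package main

import (
	"context"
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	csr, err := client.CertificatesV1().CertificateSigningRequests().
		Get(context.Background(), "csr-mk7bs", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	approved := false
	for _, c := range csr.Status.Conditions {
		if c.Type == certificatesv1.CertificateApproved && c.Status == "True" {
			approved = true
		}
	}
	switch {
	case approved && len(csr.Status.Certificate) > 0:
		fmt.Println("approved and issued")
	case approved:
		// The state logged as "approved, waiting to be issued".
		fmt.Println("approved, waiting to be issued")
	default:
		fmt.Println("pending approval")
	}
}
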
Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178515 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178555 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.263518 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.263533 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.263559 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:40 crc kubenswrapper[4985]: E0128 18:13:40.264109 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:40 crc kubenswrapper[4985]: E0128 18:13:40.264144 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:40 crc kubenswrapper[4985]: E0128 18:13:40.264081 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
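
The sandbox-less pods above fail to sync for the same reason the node keeps reporting NotReady: no CNI network configuration exists on disk yet, so the runtime reports NetworkReady=false until the multus pods being scheduled below write one. A sketch of the underlying check, scanning the conf dir named in the log message; the extension list mirrors what CNI config loaders conventionally accept and should be treated as an assumption:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken verbatim from the kubelet error message.
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed accepted extensions
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// The empty-directory case that yields the logged error.
		fmt.Printf("no CNI configuration file in %s\n", confDir)
		return
	}
	fmt.Println("found CNI configs:", confs)
}
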
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281049 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281143 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383644 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.417128 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-g2g4k"] Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.417562 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9xm27"] Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.417751 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.417802 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.419648 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6j9qp"] Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.420399 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.421102 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.421142 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.421301 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.421371 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.422048 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.422540 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.422749 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.423016 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.423859 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.425810 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.441157 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443677 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-conf-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443721 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cnibin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 
18:13:40.443746 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cni-binary-copy\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443767 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-k8s-cni-cncf-io\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443792 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-netns\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443815 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-os-release\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443838 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-os-release\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443857 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-hosts-file\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443876 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443898 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-socket-dir-parent\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443918 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcbz\" (UniqueName: \"kubernetes.io/projected/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-kube-api-access-xhcbz\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443939 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443959 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4mz\" (UniqueName: \"kubernetes.io/projected/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-kube-api-access-xz4mz\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443985 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-system-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444006 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-multus-certs\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444031 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-multus\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444052 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-kubelet\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444098 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-bin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444120 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cnibin\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444155 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-hostroot\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc 
kubenswrapper[4985]: I0128 18:13:40.444176 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-etc-kubernetes\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444199 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-system-cni-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444232 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444278 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-daemon-config\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444361 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2r9\" (UniqueName: \"kubernetes.io/projected/82fb0eec-adf5-4743-979d-6b7bf729e4f5-kube-api-access-qj2r9\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.459996 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.472956 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486286 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486320 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.489180 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.502591 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.519004 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.535573 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.544920 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-multus\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.544955 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-kubelet\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.544978 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-bin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.544993 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cnibin\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545010 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-hostroot\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545028 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-etc-kubernetes\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545045 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-system-cni-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " 
pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545073 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545095 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cnibin\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-kubelet\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545141 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-system-cni-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545158 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-hostroot\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj2r9\" (UniqueName: \"kubernetes.io/projected/82fb0eec-adf5-4743-979d-6b7bf729e4f5-kube-api-access-qj2r9\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545184 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-etc-kubernetes\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-bin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 
18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545189 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-daemon-config\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545199 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-multus\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545249 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-conf-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545291 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-conf-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cnibin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545352 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cni-binary-copy\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545374 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-k8s-cni-cncf-io\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545396 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-netns\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545430 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-os-release\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545451 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-hosts-file\") pod 
\"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545473 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-os-release\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545518 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-socket-dir-parent\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545542 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcbz\" (UniqueName: \"kubernetes.io/projected/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-kube-api-access-xhcbz\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545563 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545587 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-system-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545612 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-multus-certs\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545633 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz4mz\" (UniqueName: \"kubernetes.io/projected/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-kube-api-access-xz4mz\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545899 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-daemon-config\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" 
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545935 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cni-binary-copy\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545950 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cnibin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546006 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546059 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-socket-dir-parent\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546093 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-hosts-file\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546117 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-k8s-cni-cncf-io\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-system-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546107 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-os-release\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546161 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-multus-certs\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546145 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-netns\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546185 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-os-release\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.547594 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.549857 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:52:21.224977285 +0000 UTC Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.561048 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.570156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4mz\" (UniqueName: \"kubernetes.io/projected/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-kube-api-access-xz4mz\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.570164 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qj2r9\" (UniqueName: \"kubernetes.io/projected/82fb0eec-adf5-4743-979d-6b7bf729e4f5-kube-api-access-qj2r9\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.570275 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcbz\" (UniqueName: \"kubernetes.io/projected/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-kube-api-access-xhcbz\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.578605 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588732 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.595178 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.614559 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.657310 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.683309 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691630 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691653 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.695228 4985 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.696170 4985 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.696316 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-58b4c7f79c-55gtf/status\": read tcp 38.102.83.195:51400->38.102.83.195:6443: use of closed network connection" Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.696864 4985 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.696900 4985 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.696952 4985 reflector.go:484] 
object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697155 4985 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697193 4985 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697207 4985 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697239 4985 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697282 4985 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697615 4985 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.725701 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.736190 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.747292 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.755575 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.766518 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793590 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.799422 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rmr8h"] Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.799822 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.800673 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.802592 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zd8w7"] Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.803344 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.803733 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.803863 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.803950 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.804708 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.804750 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.806874 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807430 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807545 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807679 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807743 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807806 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807850 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.819125 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.835050 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.846472 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848604 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848682 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-mcd-auth-proxy-config\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848708 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848732 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgxm\" (UniqueName: \"kubernetes.io/projected/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-kube-api-access-fsgxm\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848771 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 
18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848798 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848821 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848844 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848863 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848901 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848920 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848940 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-rootfs\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848983 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849003 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-proxy-tls\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849026 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849043 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849080 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849303 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849437 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849486 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849516 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.864945 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.884644 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905492 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.917480 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.934013 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.946861 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is 
after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951158 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951242 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-mcd-auth-proxy-config\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgxm\" (UniqueName: \"kubernetes.io/projected/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-kube-api-access-fsgxm\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951323 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951340 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951359 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951376 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951392 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951409 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951448 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951466 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-rootfs\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951535 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951549 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-proxy-tls\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951567 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951583 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951618 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951634 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951679 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951694 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951709 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951782 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951831 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952335 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-rootfs\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952525 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-mcd-auth-proxy-config\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952605 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952634 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952722 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.953079 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.953171 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.953216 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954073 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954138 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954177 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954694 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954706 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954744 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954777 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954804 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.955305 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.959174 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.960827 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-proxy-tls\") pod \"machine-config-daemon-rmr8h\" (UID: 
\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.969745 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.972572 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgxm\" (UniqueName: \"kubernetes.io/projected/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-kube-api-access-fsgxm\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.974649 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.988726 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008000 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008134 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.016380 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 18:08:39 +0000 UTC, rotation deadline is 2026-12-17 14:03:38.867978967 +0000 UTC Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.016467 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7747h49m57.8515148s for next certificate rotation Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.023873 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.039954 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.054371 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.071703 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.090047 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.106131 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.111417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.111457 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.111474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: 
I0128 18:13:41.111492 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.111507 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.185064 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.195345 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:13:41 crc kubenswrapper[4985]: W0128 18:13:41.197566 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-9b52ae7410e044c3d62e8ba4c47f080402d59995b57fa01be5e4289793202084 WatchSource:0}: Error finding container 9b52ae7410e044c3d62e8ba4c47f080402d59995b57fa01be5e4289793202084: Status 404 returned error can't find the container with id 9b52ae7410e044c3d62e8ba4c47f080402d59995b57fa01be5e4289793202084
Jan 28 18:13:41 crc kubenswrapper[4985]: W0128 18:13:41.208078 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7b8cde_d2fe_4842_857e_545172f5bd12.slice/crio-9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3 WatchSource:0}: Error finding container 9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3: Status 404 returned error can't find the container with id 9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215532 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215544 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.277784 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.292395 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.311175 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318508 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318547 4985 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318590 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.325694 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.336758 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.352407 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.366625 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.383864 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.397663 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.413981 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422555 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422620 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.429936 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126b
d791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.448994 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.467287 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.478008 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.478312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"83cfa349ea19eeb2ba4ee6c3e38baa19feef8e50da4261b453c9b301fec5d3a4"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.479429 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.479463 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"8fa85938472cd987d53b9e4dfedafa96704cdaea57e22ced6e351648516dd147"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.480725 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9xm27" event={"ID":"1301b014-a9ed-4b29-8dc2-86c01d6bd13a","Type":"ContainerStarted","Data":"b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.480759 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9xm27" event={"ID":"1301b014-a9ed-4b29-8dc2-86c01d6bd13a","Type":"ContainerStarted","Data":"283ea2a50827490d010f9f715abf8898212189783504eb80387cce3f532818c9"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.482748 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" exitCode=0 Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.482836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.482920 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.484536 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.484656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"9b52ae7410e044c3d62e8ba4c47f080402d59995b57fa01be5e4289793202084"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.496154 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.507240 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.520439 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.521989 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525075 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525106 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525118 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.537015 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.550144 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:03:42.983412122 +0000 UTC Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.557110 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.572277 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.588955 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.591059 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.601341 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.608091 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627633 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.636271 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z"
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.657437 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.695974 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.709795 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.730877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.730931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.730941 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.730991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.731006 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.731656 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z"
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.746109 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.758041 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.778557 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z"
Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.792453 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.804406 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.812989 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.817130 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.828877 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.833759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.833904 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.833982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.834046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.834105 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.845955 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.857718 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.861071 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.861214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861279 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.861235462 +0000 UTC m=+40.687798323 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861361 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.861422 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861429 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.861409517 +0000 UTC m=+40.687972428 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861525 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861577 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.861566271 +0000 UTC m=+40.688129172 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.871127 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.888828 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.920558 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936768 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936833 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.950715 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.961969 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.962021 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962146 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962166 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962177 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962229 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.962215879 +0000 UTC m=+40.788778700 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962308 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962364 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962391 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962490 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.962463116 +0000 UTC m=+40.789025977 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.980763 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.012612 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.020565 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039524 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039544 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.067192 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143645 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143717 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246509 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246552 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.263949 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.263970 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.263970 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:42 crc kubenswrapper[4985]: E0128 18:13:42.264092 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:42 crc kubenswrapper[4985]: E0128 18:13:42.264246 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:42 crc kubenswrapper[4985]: E0128 18:13:42.264424 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.269969 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.283945 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.349782 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.350365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.350394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.350427 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.350454 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453215 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453246 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453277 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.491341 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.491407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.491422 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.491433 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.493235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.494678 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73" exitCode=0 Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.494755 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.507322 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.519401 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.541172 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z 
is after 2025-08-24T17:21:41Z"
Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.550547 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:48:11.075731855 +0000 UTC
Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555845 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555856 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555888 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.557186 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.577439 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.597621 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.611566 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.633418 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661287 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661642 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.673310 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.683964 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.696727 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.709214 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.722497 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.731717 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.743275 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765049 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765109 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765138 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.768954 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.809200 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc 
kubenswrapper[4985]: I0128 18:13:42.851352 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868768 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868839 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868850 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.894347 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.930775 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971070 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971105 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.972489 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.012841 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.053857 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073866 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073883 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.095423 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.142470 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177741 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177754 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280978 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280991 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.383960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.383994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.384005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.384020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.384029 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.443969 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dlz95"] Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.444466 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.448933 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.450211 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.451050 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.451197 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.465339 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.474659 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc08b2fa-f391-4427-b450-d72953c4056b-host\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.474729 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrg9g\" (UniqueName: \"kubernetes.io/projected/fc08b2fa-f391-4427-b450-d72953c4056b-kube-api-access-lrg9g\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.474765 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fc08b2fa-f391-4427-b450-d72953c4056b-serviceca\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.483986 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486436 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.500655 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.503652 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.503708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.507510 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.521392 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.532095 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.545139 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.550920 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:25:56.393115825 +0000 UTC Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.558949 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.574589 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.575191 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrg9g\" (UniqueName: \"kubernetes.io/projected/fc08b2fa-f391-4427-b450-d72953c4056b-kube-api-access-lrg9g\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.575337 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fc08b2fa-f391-4427-b450-d72953c4056b-serviceca\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.575422 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc08b2fa-f391-4427-b450-d72953c4056b-host\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.575497 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc08b2fa-f391-4427-b450-d72953c4056b-host\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " 
pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.576704 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fc08b2fa-f391-4427-b450-d72953c4056b-serviceca\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589454 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.606820 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrg9g\" (UniqueName: \"kubernetes.io/projected/fc08b2fa-f391-4427-b450-d72953c4056b-kube-api-access-lrg9g\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.621455 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.641238 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.684539 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693427 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693527 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.707196 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.750052 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.759699 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: W0128 18:13:43.779763 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc08b2fa_f391_4427_b450_d72953c4056b.slice/crio-6381b1fc62a6cf1f7a638a66ea8c21cb79b21eb32a67f421d5f93aeefe963701 WatchSource:0}: Error finding container 6381b1fc62a6cf1f7a638a66ea8c21cb79b21eb32a67f421d5f93aeefe963701: Status 404 returned error can't find the container with id 6381b1fc62a6cf1f7a638a66ea8c21cb79b21eb32a67f421d5f93aeefe963701 Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.791174 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.795906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.795976 4985 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.795985 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.796004 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.796017 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.832202 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.869092 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902175 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902208 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.917139 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.949339 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.992278 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005127 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.029533 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.072015 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110204 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110216 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110277 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.112738 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115276 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.133882 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137236 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.152842 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.153509 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157922 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.170139 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173899 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173931 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.190060 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.191320 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194458 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194504 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.206189 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.206352 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212847 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212859 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.229634 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.263998 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.264046 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.264074 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.264172 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.264301 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.264415 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.270058 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.309062 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315970 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.350457 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418996 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.513204 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dlz95" event={"ID":"fc08b2fa-f391-4427-b450-d72953c4056b","Type":"ContainerStarted","Data":"7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.513625 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dlz95" event={"ID":"fc08b2fa-f391-4427-b450-d72953c4056b","Type":"ContainerStarted","Data":"6381b1fc62a6cf1f7a638a66ea8c21cb79b21eb32a67f421d5f93aeefe963701"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525276 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525314 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525338 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525348 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.527355 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540" exitCode=0 Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.527412 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.531803 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.545946 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.551752 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:24:45.672538887 +0000 UTC Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.558658 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.575590 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.597774 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.610018 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.627935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.627981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.627991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.628010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.628022 4985 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.630147 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.668283 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.712026 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730778 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.751442 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.804170 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.832981 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834834 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834972 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.872395 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.912674 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937898 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.954742 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.994957 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.028618 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.040535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.041025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.041038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.041057 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.041068 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.069588 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.112482 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144163 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.149740 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.192101 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.233124 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247721 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247823 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.271196 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.327443 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351628 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351712 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.353137 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.392632 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.432474 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455497 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455589 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455647 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.473640 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.543795 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0" exitCode=0 Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.543851 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.552414 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:30:17.154967493 +0000 UTC Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.559539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 
crc kubenswrapper[4985]: I0128 18:13:45.559563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.559571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.559585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.559594 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.561795 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.581268 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.600582 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.631791 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663332 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.673684 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.713758 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.749088 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766292 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766315 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766329 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.788686 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.830299 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.869575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.869900 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.869994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.870100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.869867 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.870235 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.911070 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.955412 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.972954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.973008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.973018 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.973036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.973049 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.988549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.034993 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076161 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076189 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076211 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179287 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179301 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179334 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.263435 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.263447 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:46 crc kubenswrapper[4985]: E0128 18:13:46.263620 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.263472 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:46 crc kubenswrapper[4985]: E0128 18:13:46.263881 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:46 crc kubenswrapper[4985]: E0128 18:13:46.263902 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283421 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386975 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490089 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490150 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490229 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.551431 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee" exitCode=0 Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.552528 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:28:03.139247913 +0000 UTC Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.552451 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.558098 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.574185 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592684 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592708 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.596705 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.614775 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.630042 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.648179 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.676282 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.694512 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696456 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696514 4985 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.716726 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.730845 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.758206 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.778874 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799645 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799697 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.802479 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:
13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.819506 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.833317 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.906151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.906206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 
crc kubenswrapper[4985]: I0128 18:13:46.906218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.906236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.906275 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.009908 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.010401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.010411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.010429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.010443 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112879 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215838 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215870 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320206 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423441 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526635 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.553198 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:34:21.236578976 +0000 UTC Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.565445 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.579958 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.596568 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.610226 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.624524 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629090 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629136 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.639860 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.655983 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.675654 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.687739 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.701330 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.711443 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.729079 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.731883 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.731996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.732021 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.732050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.732072 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.743887 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.765245 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.776960 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z"
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834507 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834530 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938370 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938459 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938515 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042168 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042217 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145181 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250040 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250151 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.263079 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:13:48 crc kubenswrapper[4985]: E0128 18:13:48.263208 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.263686 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:13:48 crc kubenswrapper[4985]: E0128 18:13:48.263751 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.263937 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:13:48 crc kubenswrapper[4985]: E0128 18:13:48.264157 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358519 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462869 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462974 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.554418 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:12:12.041127568 +0000 UTC
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566722 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.584012 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb" exitCode=0
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.584191 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb"}
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.592628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af"}
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.593579 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.593645 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.593675 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.605672 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.624133 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.628830 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.630311 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.643932 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.665040 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673886 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673977 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.682221 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.697382 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.721192 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.735419 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.748642 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.759580 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
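[annotation] Every status patch in this stretch fails the same way: the kubelet's POST to the pod admission webhook at https://127.0.0.1:9743/pod is rejected during the TLS handshake because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock of 2026-01-28T18:13:48Z. A minimal Go sketch of the time-bounds check behind this x509 error, assuming the serving certificate has been copied to a local file named webhook-cert.pem (a hypothetical path, not one taken from this log):

    // Sketch, not kubelet code: Go's crypto/x509 rejects a chain when the
    // verification time falls outside [NotBefore, NotAfter], producing the
    // "certificate has expired or is not yet valid" error seen above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        pemBytes, err := os.ReadFile("webhook-cert.pem") // hypothetical file name
        if err != nil {
            fmt.Println("read cert:", err)
            return
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse cert:", err)
            return
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        default:
            fmt.Println("within validity window until", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

[end annotation]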
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.775174 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777932 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777944 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777972 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.790232 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.807456 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.818859 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.831323 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.844953 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
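[annotation] The kube-apiserver-check-endpoints container above sits in CrashLoopBackOff with "back-off 10s restarting failed container": each consecutive crash roughly doubles the wait before the next restart attempt. A small Go sketch of that schedule; the 10s base comes from the log line itself, while the 5-minute ceiling is an assumption about kubelet defaults, not something visible in this log:

    // Sketch of an exponential restart backoff with a capped delay.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second   // base taken from the "back-off 10s" message
        maxDelay := 5 * time.Minute // assumed cap, not read from this log
        for crash := 1; crash <= 8; crash++ {
            fmt.Printf("crash %d: next restart in %s\n", crash, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

[end annotation]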
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.858010 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.870045 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880945 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880979 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.887564 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2
r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.898384 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.908992 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.921241 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.936959 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.951995 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.966657 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.982222 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984644 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.008614 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691
fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.018526 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100687 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203777 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.306896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.306945 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.306957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.306984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.307026 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413890 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517742 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.554654 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:47:45.771917445 +0000 UTC Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621455 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621467 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.632913 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2" exitCode=0 Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.634653 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.658355 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.677549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.695628 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.713423 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723928 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723976 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.731785 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07
e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.746882 4985 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 
28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.762108 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.781695 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.797490 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.811179 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.823684 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831142 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831152 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.837663 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.859107 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.870997 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934178 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.958762 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.958973 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:14:05.95893794 +0000 UTC m=+56.785500771 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.959061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.959102 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.959247 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.959283 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.959350 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:05.959331111 +0000 UTC m=+56.785893952 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.959417 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:05.959387443 +0000 UTC m=+56.785950294 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036952 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.059940 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.060005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060144 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060165 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060176 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060226 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:06.060212405 +0000 UTC m=+56.886775226 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060610 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060621 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060629 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060650 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:06.060643547 +0000 UTC m=+56.887206368 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140211 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243831 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.263146 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.263226 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.263300 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.263406 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.263603 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.263786 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.351621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.352103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.352336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.352551 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.352723 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456425 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456463 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.555403 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:19:25.077420959 +0000 UTC Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560429 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.642019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.658072 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662729 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662747 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662777 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.671515 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.688790 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.703949 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.721275 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.738772 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.754795 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.765924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.765969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.765981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.765997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.766009 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.773470 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.791097 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.806909 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.838149 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691
fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.854959 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868521 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868619 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.874978 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.890350 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.970994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.971069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.971086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.971111 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.971132 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074309 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074328 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177602 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177644 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280758 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280790 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280813 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.286143 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e
911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.301925 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.324981 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.344146 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.363632 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.383599 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384247 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.409543 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.425903 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.443001 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.464304 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.477060 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486912 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486944 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.494861 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691
fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.509978 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.522627 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.556015 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:41:56.310940475 +0000 UTC Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592683 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592707 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694997 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797630 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901197 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901209 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901239 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004190 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004214 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107497 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107622 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210060 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210110 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210125 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210135 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.263618 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.263673 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.263760 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:52 crc kubenswrapper[4985]: E0128 18:13:52.263834 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:52 crc kubenswrapper[4985]: E0128 18:13:52.263926 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:52 crc kubenswrapper[4985]: E0128 18:13:52.264014 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314146 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314160 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417529 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417569 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520804 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.556227 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:40:01.481925316 +0000 UTC Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623453 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727177 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727197 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830567 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830591 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934247 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934303 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038414 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142127 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142294 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245872 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245938 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.264418 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.297128 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5"] Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.297930 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.300641 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.305443 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.319243 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-route
r-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\
\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.339674 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
8T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349245 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349305 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349352 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.360948 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.379931 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.397640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfjql\" (UniqueName: \"kubernetes.io/projected/300be08e-8565-45ad-a77e-ac1b90ff61e7-kube-api-access-dfjql\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.397771 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.397824 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: 
\"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.397851 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.403735 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.420310 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.432873 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.447982 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451687 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451725 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.467786 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.479869 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.496032 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.498880 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.498956 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.499004 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfjql\" (UniqueName: \"kubernetes.io/projected/300be08e-8565-45ad-a77e-ac1b90ff61e7-kube-api-access-dfjql\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.499082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.499916 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.500294 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.508695 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.531468 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.535906 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfjql\" (UniqueName: \"kubernetes.io/projected/300be08e-8565-45ad-a77e-ac1b90ff61e7-kube-api-access-dfjql\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: 
\"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.550810 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555326 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555362 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555414 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.556421 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:37:32.82175083 +0000 UTC Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.571669 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.588024 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.620167 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657169 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657235 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.658294 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" event={"ID":"300be08e-8565-45ad-a77e-ac1b90ff61e7","Type":"ContainerStarted","Data":"6e1cfe4fa0b27db4e6877b96a42c166a369da79cb02f1b71332dffbf069e637f"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.660198 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/0.log" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.662857 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af" exitCode=1 Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.662882 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.663485 4985 scope.go:117] "RemoveContainer" containerID="7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.684429 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.703681 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.720204 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.734854 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.753156 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.759549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.759759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.759885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.759994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.760111 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.785600 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.802030 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.820563 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.835369 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.849349 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.865892 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syn
cer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868161 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868232 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868272 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868286 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.884375 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.902489 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.915918 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.932935 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.971071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.971105 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 
crc kubenswrapper[4985]: I0128 18:13:53.971119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.971138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.971150 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074749 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074845 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074907 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178713 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178785 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178810 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178839 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178861 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.263611 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.263650 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.263620 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.263814 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.263963 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.264168 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282643 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282754 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386517 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386550 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386572 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410451 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410523 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.437281 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.443245 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.443454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.444171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.444415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.444556 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.472959 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478657 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478804 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.502337 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507116 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507322 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.526906 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532653 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532736 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532751 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.547842 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.548011 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.549988 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.550018 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.550029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.550047 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.550059 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.556763 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:46:34.447603228 +0000 UTC Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653504 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653523 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653536 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.676870 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/0.log" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.680640 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.683284 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.685589 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756853 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.844776 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hrd6k"] Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.845781 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.845884 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865350 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865713 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865745 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865778 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.885661 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.898633 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.922879 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.924660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6nz\" (UniqueName: \"kubernetes.io/projected/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-kube-api-access-ql6nz\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " 
pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.924846 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.943413 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.962993 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968753 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.981069 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.998620 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.016758 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.026395 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.026470 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6nz\" (UniqueName: \"kubernetes.io/projected/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-kube-api-access-ql6nz\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:55 crc kubenswrapper[4985]: E0128 18:13:55.026681 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:55 crc kubenswrapper[4985]: E0128 18:13:55.026808 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:55.526779949 +0000 UTC m=+46.353342860 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.030184 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.042126 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.044479 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6nz\" (UniqueName: \"kubernetes.io/projected/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-kube-api-access-ql6nz\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.053043 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" 
for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.069995 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"
/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 
6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc
4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071574 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
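The NetworkReady=false condition above keeps repeating until a CNI config file appears in /etc/kubernetes/cni/net.d/. The check being reported can be approximated with a directory scan; a self-contained Go sketch, assuming the common libcni file extensions (the container runtime, not code like this, performs the real check):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains any CNI network configuration.
// The extension list follows the usual libcni convention (.conf, .conflist,
// .json); this is an illustrative re-check of the logged condition only.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	if !ok {
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configuration present")
}
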
Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.080148 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.094340 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.107996 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173873 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173886 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.275891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.275954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.275969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.275995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.276011 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378432 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.480992 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.481439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.481453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.481475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.481490 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.531812 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:55 crc kubenswrapper[4985]: E0128 18:13:55.532000 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:55 crc kubenswrapper[4985]: E0128 18:13:55.532069 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:56.532052872 +0000 UTC m=+47.358615693 (durationBeforeRetry 1s). 
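Every condition={...} payload that setters.go logs above shares one JSON shape. A dependency-free Go sketch that reproduces it (upstream the type is k8s.io/api/core/v1.NodeCondition; a local struct is used here only so the example runs standalone):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields of the condition objects logged by
// setters.go in this file; timestamps are kept as strings for simplicity.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2026-01-28T18:13:55Z",
		LastTransitionTime: "2026-01-28T18:13:55Z",
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false",
	}
	b, err := json.Marshal(c)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // same shape as the condition payloads above
}
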
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.557487 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:03:46.167743614 +0000 UTC Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.584947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.584982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.584993 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.585007 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.585016 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688440 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688453 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
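Nearly every status patch in this log fails the same way: the pod.network-node-identity.openshift.io webhook presents a serving certificate whose NotAfter is 2025-08-24T17:21:41Z, months before the node's current time. (The kubelet-serving certificate logged just above is a different certificate and is still valid until 2026-02-24, though past its 2025-12-15 rotation deadline.) The validity-window test that the TLS handshake applies can be reproduced directly; a minimal Go sketch, assuming a tls.crt filename (the log only shows the webhook container mounting its webhook-cert volume at /etc/webhook-cert/):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkValidity applies the same window test that fails above with
// "x509: certificate has expired or is not yet valid".
func checkValidity(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	now := time.Now().UTC()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired or is not yet valid: current time %s is outside [%s, %s]",
			now.Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
	}
	return nil
}

func main() {
	// Path is an assumption for illustration; see the lead-in note.
	if err := checkValidity("/etc/webhook-cert/tls.crt"); err != nil {
		fmt.Println("verify failed:", err)
	}
}
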
Has your network provider started?"}
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.691292 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" event={"ID":"300be08e-8565-45ad-a77e-ac1b90ff61e7","Type":"ContainerStarted","Data":"c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d"}
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.691332 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" event={"ID":"300be08e-8565-45ad-a77e-ac1b90ff61e7","Type":"ContainerStarted","Data":"5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605"}
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.691630 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.712126 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identit
y-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.728113 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.744282 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.754876 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.765721 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.781751 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z"
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791566 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.796888 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.812423 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.828131 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.842091 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.855911 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.895213 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896800 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896927 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.925582 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.980955 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.980955 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.991439 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999696 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999730 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.003890 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.017459 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.032935 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.047461 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.063232 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.082325 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.094141 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103387 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
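
The four "Recording event message for node" lines and the NotReady flip recur roughly every 100 ms here because the kubelet keeps re-running its node-status sync while the patch path is blocked. The same conditions it is trying to publish can be read back through client-go; a minimal sketch, assuming a kubeconfig at the hypothetical path /tmp/kubeconfig and the node name crc from the log:

// nodeconds.go: list the Ready/pressure conditions for the node named "crc".
// Sketch only; building the *rest.Config (kubeconfig vs. in-cluster) is simplified.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// MemoryPressure, DiskPressure, PIDPressure, Ready, ... mirror the events above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}

kubectl describe node crc surfaces the same condition table interactively, including the KubeletNotReady reason quoted in these entries.
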
Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.108504 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.120965 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.135428 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.147362 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.162356 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.177869 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.200325 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206239 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206276 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206318 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.218373 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.233311 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.251566 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 
18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.264034 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.264106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.264155 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.264067 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.264335 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.264451 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.264580 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.264682 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308953 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308964 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412070 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412118 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412145 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515199 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515245 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.542310 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.542535 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.542622 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:58.542597978 +0000 UTC m=+49.369160839 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.558527 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:00:01.722522511 +0000 UTC
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618616 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618707 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.697446 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/1.log"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.698121 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/0.log"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.701140 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202" exitCode=1
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.701184 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202"}
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.701290 4985 scope.go:117] "RemoveContainer" containerID="7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af"
Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.702468 4985 scope.go:117] "RemoveContainer" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202"
Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.702756 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" 
podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721135 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.732348 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.748794 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.770408 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.787596 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.814321 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.824361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: 
I0128 18:13:56.824413 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.824426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.824444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.824457 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.836852 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.852076 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.867140 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.883411 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.900040 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.915477 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927105 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927129 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.939983 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b
3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 
6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee
050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.953054 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.966810 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.975889 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.988712 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 
18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029661 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132760 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236774 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236807 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236829 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340035 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340226 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.443883 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.443959 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.443987 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.444021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.444045 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547611 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547654 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.558804 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:51:37.856259495 +0000 UTC Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650576 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650683 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650740 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.707909 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/1.log" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.713666 4985 scope.go:117] "RemoveContainer" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202" Jan 28 18:13:57 crc kubenswrapper[4985]: E0128 18:13:57.714141 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.735344 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.754826 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.755557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.755732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.756031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.756186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.756401 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.791624 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b
3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.807766 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.839032 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.856059 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860930 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860944 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860978 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.872715 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.897526 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.915747 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.936116 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.949382 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.964232 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965434 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965522 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965696 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.978722 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.996046 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.009906 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:58 crc 
kubenswrapper[4985]: I0128 18:13:58.026639 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.069970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.070716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.070812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.071057 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.071163 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174447 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174466 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.268149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.268287 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.268357 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.268391 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.268543 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.268619 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.268789 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
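Every status-patch failure above fails the same way: the kubelet cannot POST to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24, long before the node's clock time of 2026-01-28. The wording "x509: certificate has expired or is not yet valid: current time ... is after ..." comes from Go's standard-library certificate validity check, which compares the verification time against the certificate's NotBefore/NotAfter window during the TLS handshake. A minimal sketch of that check, assuming the certificate is read from the webhook-cert volume path shown in the pod status above (/etc/webhook-cert/tls.crt is an assumed file name under that mount):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Assumed path, based on the "webhook-cert" volume mounted at
	// /etc/webhook-cert/ in the network-node-identity pod status above.
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block in tls.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The same window comparison crypto/x509 applies while verifying the
	// chain; outside it, the kubelet's webhook POST fails the handshake.
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

Because the failing webhook intercepts pod status updates for this node, the expired certificate also explains why none of the patches above can land, independent of the separate CNI-not-ready condition.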
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.268995 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278630 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278764 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381925 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485483 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485508 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485568 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.559594 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:53:43.968095787 +0000 UTC Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.572738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.572968 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.573091 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:02.57306474 +0000 UTC m=+53.399627591 (durationBeforeRetry 4s). 
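The "(durationBeforeRetry 4s)" in the entry above reflects the kubelet volume manager's exponential backoff: each consecutive failure of the same mount operation doubles the wait before the next attempt, so 4s corresponds to the fourth straight failure to mount the metrics-certs secret ("not registered" here typically means the kubelet's local secret cache has not yet synced that object). A sketch of that backoff policy; the 500ms initial delay and the 2m2s cap are assumptions modeled on the upstream exponentialbackoff helper and should be verified against the kubelet version in use:

```go
package main

import (
	"fmt"
	"time"
)

// Assumed constants, modeled on the exponentialbackoff helper used by the
// kubelet's volume manager (nestedpendingoperations).
const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

// backoff tracks the delay applied after consecutive failures of a single
// volume operation (here: mounting the metrics-certs secret volume).
type backoff struct {
	durationBeforeRetry time.Duration
}

// fail records one more consecutive failure, doubling the delay up to the cap.
func (b *backoff) fail() {
	if b.durationBeforeRetry == 0 {
		b.durationBeforeRetry = initialDurationBeforeRetry
		return
	}
	b.durationBeforeRetry *= 2
	if b.durationBeforeRetry > maxDurationBeforeRetry {
		b.durationBeforeRetry = maxDurationBeforeRetry
	}
}

func main() {
	var b backoff
	for i := 0; i < 4; i++ { // four consecutive MountVolume failures
		b.fail()
	}
	// Matches the log: 0.5s -> 1s -> 2s -> 4s, i.e. "durationBeforeRetry 4s".
	fmt.Printf("No retries permitted until %s (durationBeforeRetry %v)\n",
		time.Now().Add(b.durationBeforeRetry).Format(time.RFC3339), b.durationBeforeRetry)
}
```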
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588434 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588479 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691785 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691909 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691932 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794838 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794856 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794897 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898629 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000859 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000892 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103731 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206567 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206645 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206734 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309193 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309275 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309331 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
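Each heartbeat above re-records the same four node events and re-sets the Ready condition to False. Note the condition carries both a lastHeartbeatTime (bumped on every sync) and a lastTransitionTime (when the status last flipped). A small Go mirror of the condition object as it appears in these lines; the struct is an illustration with field names taken from the JSON, not the type from k8s.io/api:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // NodeCondition mirrors the JSON logged by setters.go above.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("node Ready=%s reason=%s\n", c.Status, c.Reason)
    }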
Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412689 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412726 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515862 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515940 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515953 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.560379 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:32:11.053176374 +0000 UTC Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620183 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620300 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620352 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725902 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
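The certificate_manager lines report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every pass, because the deadline is re-drawn with random jitter inside the certificate's validity window each time rotation is re-evaluated, so a fleet of kubelets does not rotate in lockstep. A sketch of that idea in Go, assuming the deadline falls in the 70-90% stretch of the lifetime (the exact fractions are an assumption, not taken from client-go):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a jittered point late in the validity window.
    // The 0.7-0.9 fraction range is assumed for illustration.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.Add(-90 * 24 * time.Hour) // assumed lifetime
        for i := 0; i < 3; i++ {
            // Re-evaluating prints a different deadline each time, as in
            // the successive certificate_manager.go lines above.
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }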
Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.833949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.834013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.834032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.834058 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.834076 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.937990 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.938046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.938063 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.938087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.938105 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042583 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042632 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145724 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145853 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249687 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.263984 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.264010 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:00 crc kubenswrapper[4985]: E0128 18:14:00.264173 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.264325 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.264360 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:00 crc kubenswrapper[4985]: E0128 18:14:00.264480 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:00 crc kubenswrapper[4985]: E0128 18:14:00.264727 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:00 crc kubenswrapper[4985]: E0128 18:14:00.264824 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353707 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353730 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353784 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
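Every one of these sync failures is downstream of the same condition: the runtime finds no network configuration under /etc/kubernetes/cni/net.d/, so no pod sandbox can be wired up and the node never reports Ready. For reference, a minimal conflist of the general shape a CNI runtime loads from that directory; the file name, plugin choice, and subnet below are placeholders for illustration, not the configuration this cluster's network operator eventually writes:

    /etc/kubernetes/cni/net.d/10-example.conflist (placeholder):

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }

In this log the directory presumably stays empty while the OVN-Kubernetes and Multus pods recorded elsewhere in the log are still coming up; the file is expected to be written by those components rather than by hand.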
Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.456906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.457373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.457537 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.457732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.457872 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560533 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:28:59.633906578 +0000 UTC Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560847 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560873 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560884 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.663905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.663956 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.663972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.663999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.664033 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768175 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768237 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871290 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871311 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871337 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871358 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974426 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077686 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077729 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181115 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181232 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.280559 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284204 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284288 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284314 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.299567 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.313807 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.335902 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.355515 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.376053 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387113 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387129 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
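The status_manager patch failures above all share one root cause: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24, while the node clock reads 2026-01-28, so every TLS handshake is rejected before the pod-status patch is even delivered. The rejection is the standard x509 validity-window check, i.e. the certificate is usable only while NotBefore <= now <= NotAfter; a minimal Go equivalent using crypto/x509 (the certificate path is a placeholder):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkValidity mirrors the x509 validity-window test that fails in
    // the log: valid only while NotBefore <= now <= NotAfter.
    func checkValidity(cert *x509.Certificate, now time.Time) error {
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is after %s",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        }
        return nil
    }

    func main() {
        pemBytes, err := os.ReadFile("webhook-cert.pem") // placeholder path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if err := checkValidity(cert, time.Now()); err != nil {
            fmt.Println(err) // the failure mode seen in the patches above
        }
    }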
Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.393106 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.410997 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.427926 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.444564 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.456017 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.467894 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.479213 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489224 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489242 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489301 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489321 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.491807 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.509549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.524893 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.560892 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:04:44.288635943 +0000 UTC Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592444 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695569 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.263858 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.263923 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.263923 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.264073 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.264226 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.264474 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.264539 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.264766 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.412634 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.561359 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:58:02.569340537 +0000 UTC
Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.621342 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.621623 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.622044 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:10.622010491 +0000 UTC m=+61.448573352 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353430 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456731 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456831 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456847 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.503943 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.512317 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.524998 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.542512 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559639 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.561917 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:13:20.418214233 +0000 UTC Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.579683 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b
3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.593057 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.611517 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.626684 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.645993 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662628 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662774 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662817 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.663942 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.678553 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.699725 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.714347 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.730385 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.753184 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766006 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766122 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.769473 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.827762 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.845195 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc 
kubenswrapper[4985]: I0128 18:14:03.869727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.869809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.869826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.869850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.869866 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972660 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972710 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076089 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076146 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076166 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076279 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179426 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.263387 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.263459 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.263400 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.264053 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.264363 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.264435 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.264502 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.264566 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.282673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.283053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.283216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.283399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.283578 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386298 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386342 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.489945 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.490006 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.490024 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.490048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.490067 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.562389 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:12:40.928330821 +0000 UTC
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.592910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.592981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.592999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.593028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.593045 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696501 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696525 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.767782 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.768107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.768201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.768322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.768549 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.784472 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789737 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789781 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.807769 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812900 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812931 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.829990 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836329 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836373 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.853844 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859521 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859647 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.872399 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.872537 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874425 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977517 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977577 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977614 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977629 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.081793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.081854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.081867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.087706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.087905 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192313 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192378 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295137 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398785 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398831 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502035 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502116 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502159 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.562859 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:12:43.471683309 +0000 UTC
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605002 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605067 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605090 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605152 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708521 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708545 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811243 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914432 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914491 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.961175 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.961439 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961516 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:14:37.961481294 +0000 UTC m=+88.788044145 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961623 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.961707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961728 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:37.96170208 +0000 UTC m=+88.788264931 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961833 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961890 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:37.961877064 +0000 UTC m=+88.788439915 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.062588 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.062677 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062879 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062881 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062909 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062927 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062936 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062945 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.063020 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:38.062995675 +0000 UTC m=+88.889558526 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.063065 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:38.063039676 +0000 UTC m=+88.889602537 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120762 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120886 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224436 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.263320 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.263374 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.263406 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.263330 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.263560 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.263704 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.263847 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.263968 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.327974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.328059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.328084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.328128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.328154 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.431902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.431964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.431983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.432014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.432036 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.535926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.535995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.536025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.536057 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.536081 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.564082 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:04:28.297588372 +0000 UTC Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639679 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639758 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639838 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743609 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846849 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846982 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950735 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054340 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054482 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157596 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157651 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157669 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261548 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261628 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364652 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364716 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468383 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.565032 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:57:55.99114387 +0000 UTC Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572179 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572237 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691331 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691427 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691510 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691529 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795483 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899361 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002497 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002529 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002556 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.105931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.106005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.106028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.106061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.106086 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.209127 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.209535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.209685 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.209916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.210163 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.263287 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.263299 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.263423 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.263467 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:08 crc kubenswrapper[4985]: E0128 18:14:08.263674 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:08 crc kubenswrapper[4985]: E0128 18:14:08.263823 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:08 crc kubenswrapper[4985]: E0128 18:14:08.263961 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:08 crc kubenswrapper[4985]: E0128 18:14:08.264120 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313701 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313779 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416624 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416711 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520300 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520429 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.565774 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:57:57.972722402 +0000 UTC Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623847 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728150 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728183 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728200 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831289 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831384 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831406 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.937888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.937994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.938023 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.938065 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.938104 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042458 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042502 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145620 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248886 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248895 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248922 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352229 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352364 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352382 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455736 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455821 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455856 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.559192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.559671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.559801 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.559949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.560082 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.566371 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:12:02.926958981 +0000 UTC Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664678 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767767 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767989 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871741 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871787 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974680 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974740 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974764 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077701 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077809 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181314 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181425 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181444 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.263624 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.263648 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.263813 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.263856 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.263651 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.264030 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.264160 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.264388 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.265903 4985 scope.go:117] "RemoveContainer" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284721 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284892 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388788 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388866 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492899 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492917 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492966 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.566933 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:20:59.975101739 +0000 UTC Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596930 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.627054 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.627286 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.627378 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. 
No retries permitted until 2026-01-28 18:14:26.627353431 +0000 UTC m=+77.453916292 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700528 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700664 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.770885 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/1.log" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.775413 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.775988 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.801120 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803450 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803481 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.823649 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.842576 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.863815 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.879308 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.893490 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906512 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906589 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906653 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.911186 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.925183 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.940768 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.963766 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.981591 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.998031 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.009896 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.009999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.010043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.010054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.010081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.010095 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.019836 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63
a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.033752 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.045997 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.060203 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113631 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.219947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.220002 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.220020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.220044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.220065 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.284916 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.297907 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.313778 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.322973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.323012 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc 
kubenswrapper[4985]: I0128 18:14:11.323025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.323045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.323060 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.329284 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.344578 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.361902 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.377002 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.393655 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.406700 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.420657 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425862 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425874 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425906 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.437151 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.452486 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.471578 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.481644 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.496535 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.509084 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.520057 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528638    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528666    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528677    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528694    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528706    4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.567178    4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:50:49.07882908 +0000 UTC
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632611    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632709    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632726    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632755    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632773    4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736347    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736421    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736440    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736462    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736478    4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.781091    4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/2.log"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.782344    4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/1.log"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.786803    4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82" exitCode=1
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.786880    4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82"}
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.786949    4985 scope.go:117] "RemoveContainer" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.788021    4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82"
Jan 28 18:14:11 crc kubenswrapper[4985]: E0128 18:14:11.788570    4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.806325    4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.817865 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.830191 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.839827 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.839939    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.840211    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.840230    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.840294    4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.840317    4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.852410    4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.869112 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.888053 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.905116 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.921302 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.935607 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943520 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.949912 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.961361 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.976980 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.993011 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.007773 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.022099 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047369 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047452 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047467 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.052506 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb
4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151173 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151185 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.253979 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.254066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.254104 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.254139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.254162 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.262965 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.263012 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.263035 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.263092 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.263174 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.263285 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.263586 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.263846 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.357772 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.357938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.357963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.358014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.358038 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.421178 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.441220 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.460910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.460972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.460997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.461028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.461054 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.475039 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.493683 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.515748 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.534537 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.555126 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564519 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564566 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.567748 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:18:51.374950063 +0000 UTC Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.578478 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.599736 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.623799 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.645410 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668036 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668396 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.690779 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.706690 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.726445 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.741793 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.756565 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.769936 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.771781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.771826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.771840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.771858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.771870 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.802136 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/2.log"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.812884 4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82"
Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.813204 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.836197 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.849502 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.861626 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874450 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874519 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874532 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.876147 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.890595 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.900957 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.919282 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.942536 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977157 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977229 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977241 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.984530 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.996844 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.016186 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.025562 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.039407 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.058577 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.074781 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079660 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.091911 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.108374 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.182894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.182968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.182995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.183055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.183079 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286835 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286918 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.390897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.390943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.390958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.390997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.391013 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494790 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494816 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494871 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.568437 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:54:29.185243342 +0000 UTC Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598455 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598499 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701628 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701730 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701783 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805118 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805170 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805192 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908271 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908331 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908390 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011507 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115214 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115318 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218825 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218860 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.263058 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.263121 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.263225 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.263239 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.263291 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.263440 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.263600 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.263747 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321728 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321776 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424195 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527457 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.569638 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:50:39.421002 +0000 UTC Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630329 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630395 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630407 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732971 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732981 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835980 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835989 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.903910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.903981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.904001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.904029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.904048 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.923589 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:14Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928679 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928721 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928780 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.949397 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:14Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955871 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.976485 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:14Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981383 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
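
Every failed patch in this stretch has the same root cause, spelled out at the tail of each error record: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28. The x509 step of the TLS handshake is, at its core, a validity-window comparison; a minimal standalone sketch of that check follows (the certificate path is hypothetical):

```go
// Standalone validity-window check mirroring the x509 test that fails in the
// handshake above. The certificate path is a hypothetical example.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```

The "current time ... is after ..." wording in the log appears to come straight from Go's crypto/x509 CertificateInvalidError, which the sketch above reproduces.
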
event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981464 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: E0128 18:14:15.000416 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:14Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005470 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005514 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
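
The payload kubelet keeps trying to send is a strategic merge patch: the $setElementOrder/conditions directive lists the conditions by their merge key (type) so the API server can preserve list order, followed by the condition entries being updated. A sketch of how such a patch can be generated with the apimachinery strategic-merge-patch helper, the same family of helpers kubelet's status patching builds on; the values are illustrative, not taken from this cluster:

```go
// Sketch: produce a two-way strategic merge patch between a cached Node and a
// locally rebuilt status, as kubelet's status sync does. Illustrative values.
package main

import (
	"encoding/json"
	"fmt"
	"log"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	oldNode := v1.Node{Status: v1.NodeStatus{Conditions: []v1.NodeCondition{
		{Type: v1.NodeReady, Status: v1.ConditionTrue},
	}}}
	newNode := oldNode.DeepCopy()
	newNode.Status.Conditions[0].Status = v1.ConditionFalse
	newNode.Status.Conditions[0].Reason = "KubeletNotReady"
	newNode.Status.Conditions[0].LastTransitionTime = metav1.Now()

	oldJSON, _ := json.Marshal(oldNode) // errors elided in this sketch
	newJSON, _ := json.Marshal(newNode)
	patch, err := strategicpatch.CreateTwoWayMergePatch(oldJSON, newJSON, v1.Node{})
	if err != nil {
		log.Fatal(err)
	}
	// Typically includes "$setElementOrder/conditions" alongside the changed
	// condition, matching the shape of the patches logged above.
	fmt.Println(string(patch))
}
```
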
event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005553 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005572 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: E0128 18:14:15.021744 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:15 crc kubenswrapper[4985]: E0128 18:14:15.021893 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023833 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
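
The "will retry" / "update node status exceeds retry count" pair brackets kubelet's bounded sync loop in kubelet_node_status.go: each status sync retries the patch a fixed number of times (nodeStatusUpdateRetry, 5 in upstream kubelet) and then gives up until the next sync period, which is why the same large patch reappears within milliseconds. Schematically, under those assumptions:

```go
// Schematic reconstruction of kubelet's bounded status-update loop.
// nodeStatusUpdateRetry is 5 in upstream kubelet; tryUpdateNodeStatus stands
// in for the real patch call, which here keeps failing on the expired
// webhook certificate.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5

func tryUpdateNodeStatus() error {
	// Placeholder for building the status patch and sending it to the API server.
	return errors.New("Internal error occurred: failed calling webhook")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```
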
event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023874 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023901 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127794 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.230978 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.231026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.231046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.231074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.231091 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333917 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333934 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333947 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436330 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436383 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436409 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539685 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.570924 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:24:08.274059605 +0000 UTC Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642478 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.745977 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.746016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.746027 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.746043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.746054 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849506 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953943 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057481 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160687 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160730 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.263507 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:16 crc kubenswrapper[4985]: E0128 18:14:16.263779 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.263992 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.264061 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.264239 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:16 crc kubenswrapper[4985]: E0128 18:14:16.264284 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:16 crc kubenswrapper[4985]: E0128 18:14:16.264548 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
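
The NotReady condition itself is mechanical: the container runtime reports NetworkReady=false because nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/, and kubelet skips sandbox creation for any pod that needs pod networking (the "Error syncing pod, skipping" records around this point) while host-network pods proceed. The discovery step behind that message is essentially a directory scan for config files; a simplified sketch, not the actual libcni implementation:

```go
// Simplified sketch of CNI config discovery: scan the conf directory for
// *.conf, *.conflist and *.json files and treat the network as not ready
// until at least one appears (written, in this cluster, by the network
// operator once it starts).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, filepath.Join(dir, e.Name()))
		}
	}
	return confs, nil
}

func main() {
	confs, err := cniConfFiles("/etc/kubernetes/cni/net.d")
	if err != nil || len(confs) == 0 {
		fmt.Println("network plugin not ready: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configurations:", confs)
}
```
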
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:16 crc kubenswrapper[4985]: E0128 18:14:16.264721 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266866 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370132 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473099 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473142 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473178 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.571896 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:45:07.291829076 +0000 UTC Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576679 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576730 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576747 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576759 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684592 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the five-entry node-status cycle above (four "Recording event message for node" events plus the "Node became not ready" setter) repeats verbatim, with only timestamps advancing, at 18:14:16.787, 18:14:16.891, 18:14:16.994, 18:14:17.099, 18:14:17.203, 18:14:17.306, 18:14:17.410 and 18:14:17.513]
Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.572089 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:15:40.297290557 +0000 UTC
[node-status cycle repeats at 18:14:17.616, 18:14:17.720, 18:14:17.823, 18:14:17.926 and 18:14:18.030]
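
The condition above is mechanical: the node's Ready status stays False until the container runtime reports NetworkReady=true, which for a CNI-based runtime means at least one usable network config exists under /etc/kubernetes/cni/net.d/. The Go sketch below reproduces that check in spirit only; it is a minimal illustration, not the kubelet's or CRI-O's actual code, and the file-suffix list is the standard CNI convention rather than anything read from this log.

    // cnicheck.go: minimal sketch of the readiness test behind
    // "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cniConfigPresent reports whether dir holds at least one CNI config.
    // CNI conventionally accepts .conf, .conflist and .json files.
    func cniConfigPresent(dir string) (bool, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if strings.HasSuffix(name, ".conf") ||
    			strings.HasSuffix(name, ".conflist") ||
    			strings.HasSuffix(name, ".json") {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
    	if err != nil || !ok {
    		// Mirrors the message the kubelet keeps logging above.
    		fmt.Println("NetworkReady=false: no CNI configuration file found")
    		return
    	}
    	fmt.Println("NetworkReady=true")
    }

Once the network plugin (here, OVN-Kubernetes via Multus) writes its config into that directory, the same poll flips the condition and the repeated NodeNotReady events stop.
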
[node-status cycle repeats at 18:14:18.132 and 18:14:18.235]
Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.263231 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.263303 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.263303 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.263360 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:18 crc kubenswrapper[4985]: E0128 18:14:18.263443 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:18 crc kubenswrapper[4985]: E0128 18:14:18.263589 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:18 crc kubenswrapper[4985]: E0128 18:14:18.263670 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:18 crc kubenswrapper[4985]: E0128 18:14:18.263709 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[node-status cycle repeats at 18:14:18.338]
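
Each "Error syncing pod, skipping" entry is a pod worker refusing to sync a pod that needs the cluster network while NetworkReady is false; host-network pods are exempt from this gate, which is why only these four pod-network pods appear while OVN-Kubernetes and Multus themselves keep running. The sketch below illustrates that gate with hypothetical types; pod, syncPod and errNetworkNotReady are inventions for this example, not kubelet identifiers.

    // syncgate.go: illustrative sketch (assumed shape, not kubelet source)
    // of why only pod-network pods are skipped while the CNI is absent.
    package main

    import (
    	"errors"
    	"fmt"
    )

    type pod struct {
    	name        string
    	hostNetwork bool
    }

    var errNetworkNotReady = errors.New(
    	"network is not ready: container runtime network not ready: NetworkReady=false")

    // syncPod refuses pods that need the pod network until it is ready.
    func syncPod(p pod, networkReady bool) error {
    	if !networkReady && !p.hostNetwork {
    		return errNetworkNotReady // worker logs this and retries later
    	}
    	// ... create sandbox, pull images, start containers ...
    	return nil
    }

    func main() {
    	pods := []pod{
    		{name: "openshift-multus/network-metrics-daemon-hrd6k", hostNetwork: false},
    		{name: "openshift-ovn-kubernetes/ovnkube-node-zd8w7", hostNetwork: true},
    	}
    	for _, p := range pods {
    		if err := syncPod(p, false); err != nil {
    			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.name)
    		}
    	}
    }
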
[node-status cycle repeats at 18:14:18.441 and 18:14:18.543]
Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.572942 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:03:26.818602208 +0000 UTC
[node-status cycle repeats at 18:14:18.647, 18:14:18.751, 18:14:18.859, 18:14:18.962, 18:14:19.065, 18:14:19.169, 18:14:19.271, 18:14:19.374 and 18:14:19.477]
Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.573125 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:31:52.273059548 +0000 UTC
[node-status cycle repeats at 18:14:19.580, 18:14:19.683, 18:14:19.786, 18:14:19.889, 18:14:19.992, 18:14:20.095 and 18:14:20.198]
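
The certificate_manager.go:356 lines report a fixed expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every evaluation (2025-11-11, 2025-12-06, 2026-01-16, and 2025-12-13 below). That is expected behavior, not drift: client-go's certificate manager re-draws a jittered deadline, reportedly in roughly the 70-90% band of the certificate's validity window, each time it checks whether to rotate. The sketch below assumes that band and a one-year lifetime; both are assumptions for illustration, not values taken from this log.

    // rotation.go: sketch of a jittered rotation deadline, explaining why
    // the same expiration yields a different deadline on each log line.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // nextRotationDeadline picks a random point in [70%, 90%) of the
    // certificate's validity window (assumed band, mirroring client-go).
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
    	return notBefore.Add(jittered)
    }

    func main() {
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
    	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed lifetime
    	for i := 0; i < 3; i++ {
    		// Each draw lands somewhere different, as in the log above.
    		fmt.Println("rotation deadline is", nextRotationDeadline(notBefore, notAfter))
    	}
    }
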
Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.263590 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.263628 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:20 crc kubenswrapper[4985]: E0128 18:14:20.263730 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.263946 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:20 crc kubenswrapper[4985]: E0128 18:14:20.263984 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.264000 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:20 crc kubenswrapper[4985]: E0128 18:14:20.264220 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:20 crc kubenswrapper[4985]: E0128 18:14:20.264327 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
[node-status cycle repeats at 18:14:20.300 and 18:14:20.403]
[node-status cycle repeats at 18:14:20.508]
Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.573767 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:25:49.075299492 +0000 UTC
[node-status cycle repeats at 18:14:20.610, 18:14:20.713, 18:14:20.816, 18:14:20.919, 18:14:21.021, 18:14:21.124 and 18:14:21.227]
Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.279835 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.294312 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z"
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.328827 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330941 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.348604 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.359651 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.374441 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.389786 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.403488 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.423366 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.435943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.435983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc 
kubenswrapper[4985]: I0128 18:14:21.435992 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.436008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.436019 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.438108 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.449180 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.463802 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.477368 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.493188 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.507004 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.520813 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539348 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.574976 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:41:57.52558498 +0000 UTC Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643276 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643338 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745907 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745920 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848230 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848514 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848587 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952750 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952795 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056939 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160146 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160171 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262651 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.263033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.263033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.263033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:22 crc kubenswrapper[4985]: E0128 18:14:22.263274 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.263115 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:22 crc kubenswrapper[4985]: E0128 18:14:22.263125 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:22 crc kubenswrapper[4985]: E0128 18:14:22.263353 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:22 crc kubenswrapper[4985]: E0128 18:14:22.263207 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365033 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365110 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467827 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570872 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570887 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.575902 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:10:52.033356813 +0000 UTC Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673442 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673452 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.775919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.776065 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.776084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.776102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.776114 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879357 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879371 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982153 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982234 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.084961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.085021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.085031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.085047 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.085058 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188212 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188224 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291245 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291278 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394524 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394573 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497530 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497548 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.576018 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:09:04.040155691 +0000 UTC Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600951 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703597 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806192 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909537 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909548 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909571 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012576 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012589 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115689 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218515 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218526 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.263697 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.263745 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.263826 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.263855 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:24 crc kubenswrapper[4985]: E0128 18:14:24.263997 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:24 crc kubenswrapper[4985]: E0128 18:14:24.264069 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:24 crc kubenswrapper[4985]: E0128 18:14:24.264179 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:24 crc kubenswrapper[4985]: E0128 18:14:24.264286 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322075 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322111 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322123 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424805 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424831 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424848 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527376 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527418 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.577079 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:39:49.85809525 +0000 UTC
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629869 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629900 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733777 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733888 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.836473 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.836818 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.836925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.836996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.837063 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940650 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042932 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042956 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042974 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.145416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.145781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.145916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.146006 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.146101 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248614 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248693 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248703 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
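
Note: the batches above repeat on a roughly 100 ms cadence because the kubelet's runtime network check keeps failing for the same reason each time: nothing under /etc/kubernetes/cni/net.d/ looks like a CNI configuration yet. A minimal stdlib-only Go sketch of that kind of directory check follows; the accepted extensions (.conf, .conflist, .json) follow common libcni convention and are an assumption here, not something this log states.

    // cnicheck.go - report whether a CNI conf directory has any usable config.
    // Sketch only; the extension list (.conf, .conflist, .json) is an
    // assumption based on common libcni behavior, not taken from this log.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the log messages
        ok, err := hasCNIConfig(dir)
        if err != nil {
            fmt.Fprintln(os.Stderr, "read error:", err)
            os.Exit(1)
        }
        if !ok {
            fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", dir)
        }
    }

Running it on the node (go run cnicheck.go) shows whether the network provider has written its configuration yet.
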
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309848 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.322424 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.326656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.326767 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.326876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.326978 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.327062 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.342383 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
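
Note: the condition={...} payload that setters.go prints is a plain NodeCondition serialized as JSON. A stdlib-only Go sketch that rebuilds the same shape; the struct below is a local stand-in for the Kubernetes API type, with json tags copied from the payload above.

    // nodecondition.go - reproduce the Ready=False condition seen in the log.
    // The struct is a local stand-in for the Kubernetes NodeCondition type;
    // json tags are copied from the condition={...} payload above.
    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        now := time.Now().UTC().Format(time.RFC3339)
        c := nodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Reason:             "KubeletNotReady",
            Message: "container runtime network not ready: NetworkReady=false " +
                "reason:NetworkPluginNotReady message:Network plugin returns error: " +
                "no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
                "Has your network provider started?",
        }
        b, _ := json.Marshal(c)
        fmt.Printf("Node became not ready: condition=%s\n", string(b))
    }
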
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
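
Note: every status patch in this stretch dies at the node.network-node-identity.openshift.io webhook with "x509: certificate has expired or is not yet valid"; per the error text, the webhook's certificate expired on 2025-08-24, five months before the log's current time. A stdlib Go sketch of the same NotBefore/NotAfter window check against a PEM file (the file path is hypothetical):

    // certcheck.go - check a PEM certificate's NotBefore/NotAfter window,
    // mirroring the x509 validity error reported by the webhook call above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; substitute the webhook's serving certificate.
        data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        now := time.Now()
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        default:
            fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
        }
    }
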
event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349447 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.363462 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
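
Note: "Error updating node status, will retry" means the kubelet does not give up after one failed patch; it retries a bounded number of times within a single sync. A generic Go sketch of that pattern; the retry count of 5 mirrors kubelet's nodeStatusUpdateRetry constant as best I recall and should be treated as an assumption, and the tryPatchStatus stub is purely illustrative.

    // retrystatus.go - bounded retry loop in the spirit of kubelet's
    // "Error updating node status, will retry". The tryPatchStatus stub
    // and the retry count are assumptions for illustration.
    package main

    import (
        "errors"
        "fmt"
    )

    const nodeStatusUpdateRetry = 5 // assumed to match kubelet's constant

    func tryPatchStatus(attempt int) error {
        // Stand-in for the real PATCH against the API server; always fails
        // here, like the webhook-rejected patches in the log.
        return errors.New("failed calling webhook: certificate has expired")
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryPatchStatus(i); err != nil {
                fmt.Printf("Error updating node status, will retry: %v\n", err)
                continue
            }
            return nil
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println(err)
        }
    }
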
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368369 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
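
Note: the earlier certificate_manager.go line (expiration 2026-02-24, rotation deadline 2025-12-28) shows the kubelet-serving certificate's rotation deadline falling well before expiry, and already in the past at log time. client-go's certificate manager picks the deadline at a jittered point late in the certificate's lifetime, commonly described as 70-90% of the validity period; that window is an assumption in the sketch below, as is the hypothetical notBefore value (only the expiry comes from the log).

    // rotationdeadline.go - compute a jittered rotation deadline from a
    // certificate's validity window. The 70-90% window is an assumption
    // matching how client-go's certificate manager is commonly described.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        // Pick a uniform point in [70%, 90%] of the lifetime.
        fraction := 0.7 + 0.2*rand.Float64()
        return notBefore.Add(time.Duration(float64(lifetime) * fraction))
    }

    func main() {
        notBefore := time.Date(2025, time.November, 26, 5, 53, 3, 0, time.UTC) // hypothetical issue time
        notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
        fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
    }
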
event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368445 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.380628 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
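
Note: almost every line in this stretch is one of a handful of messages repeated on a ~100 ms cadence, so a quick tally is often more readable than the raw stream. A stdlib Go sketch that counts the quoted message of each kubenswrapper line read on stdin; the regular expression is a loose fit for the format above, not an official grammar.

    // logtally.go - count repeated kubelet messages in a log like this one.
    // The regexp loosely matches lines of the form
    //   ... kubenswrapper[4985]: I0128 18:14:25.368406 4985 file.go:123] "Message" key="value"
    // and is an approximation, not an official grammar.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var msgRe = regexp.MustCompile(`kubenswrapper\[\d+\]: [IWE]\d{4} \S+ \d+ \S+\] "([^"]+)"`)

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some entries are very long
        for sc.Scan() {
            if m := msgRe.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for msg, n := range counts {
            fmt.Printf("%6d  %s\n", n, msg)
        }
    }

Usage: go run logtally.go < kubelet.log
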
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384862 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.397550 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.397704 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399694 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399741 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399790 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503550 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503700 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.578108 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:58:49.586576544 +0000 UTC Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606725 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606737 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.710099 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.710198 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.710759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.710841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.711209 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813922 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917175 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019899 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122934 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122960 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225602 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225676 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.265336 4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.265694 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.266056 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.266184 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.266429 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.266533 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.266741 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.266848 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.269607 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.269857 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328941 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328964 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431694 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431728 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.535912 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.535992 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.536017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.536048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.536073 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.579154 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:03:06.315057537 +0000 UTC Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639337 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639362 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639372 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.695987 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.696152 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.696218 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. 
No retries permitted until 2026-01-28 18:14:58.696200922 +0000 UTC m=+109.522763733 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743517 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846841 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
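[Annotation] The metrics-certs mount for network-metrics-daemon-hrd6k fails with object "openshift-multus"/"metrics-daemon-secret" not registered: the kubelet serves secret volumes from a local watch-based cache, and after the restart that cache has not yet registered the object, so MountVolume.SetUp cannot proceed. Rather than retrying immediately, the volume manager schedules the next attempt with exponential back-off, which is where "durationBeforeRetry 32s" and the "No retries permitted until ... 18:14:58" timestamp come from. A sketch of that doubling-with-cap schedule; the 500ms base and ~2m cap are assumptions consistent with the observed 32s, not values read from this kubelet's config:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed schedule: 500ms initial delay, doubling per failure, capped near 2m.
        delay := 500 * time.Millisecond
        maxDelay := 2 * time.Minute
        lastFailure := time.Date(2026, 1, 28, 18, 14, 26, 0, time.UTC)
        for i := 0; i < 6; i++ { // six consecutive failures double the delay six times
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
        fmt.Printf("durationBeforeRetry %s, no retries permitted until %s\n",
            delay, lastFailure.Add(delay).Format("15:04:05"))
        // Prints 32s and 18:14:58, matching the MountVolume.SetUp entry above.
    }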
Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949419 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053153 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053196 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156610 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.259898 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.259961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.259972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.259995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.260009 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362542 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362681 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362705 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465767 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568825 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.579468 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:43:51.32191099 +0000 UTC Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671425 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775131 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775177 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878347 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878396 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981668 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084595 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188515 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188597 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.263230 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.263237 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.263341 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.263371 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:28 crc kubenswrapper[4985]: E0128 18:14:28.263695 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:28 crc kubenswrapper[4985]: E0128 18:14:28.264206 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:28 crc kubenswrapper[4985]: E0128 18:14:28.264339 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:28 crc kubenswrapper[4985]: E0128 18:14:28.264460 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292501 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292530 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292554 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399504 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399602 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399619 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502725 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502776 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502826 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.580011 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:18:51.911022586 +0000 UTC Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.605861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.606213 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.606458 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.606672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.606877 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.709974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.710029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.710055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.710082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.710102 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.812622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.812931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.812995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.813064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.813133 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.916917 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.916982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.917002 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.917028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.917047 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021833 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021945 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.124935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.125009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.125025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.125043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.125055 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228722 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332208 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332226 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436142 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436238 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.540688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.541377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.541413 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.541448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.541471 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.581188 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:18:30.071064521 +0000 UTC Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645153 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645219 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645350 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748794 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748900 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748977 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852711 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956648 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059681 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059710 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.162970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.163032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.163048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.163072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.163089 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.263717 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.263732 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.263901 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.264509 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:30 crc kubenswrapper[4985]: E0128 18:14:30.264692 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:30 crc kubenswrapper[4985]: E0128 18:14:30.264923 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:30 crc kubenswrapper[4985]: E0128 18:14:30.265050 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:30 crc kubenswrapper[4985]: E0128 18:14:30.265132 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267309 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267345 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267358 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370378 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370400 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474340 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474425 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578473 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578536 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.581872 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:43:08.54746056 +0000 UTC Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681506 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681570 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681629 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785329 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785437 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.880000 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/0.log" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.880073 4985 generic.go:334] "Generic (PLEG): container finished" podID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" containerID="9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb" exitCode=1 Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.880117 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerDied","Data":"9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.880668 4985 scope.go:117] "RemoveContainer" containerID="9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889499 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.897668 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.918392 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.929716 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.943637 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.956356 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.969801 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.982198 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.995328 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000506 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000549 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.011786 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.024836 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.041658 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.055997 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.067919 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.095835 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104929 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.108875 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.121196 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.133049 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.207692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.207996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.208132 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.208218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.208307 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.278447 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.294399 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.310801 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311625 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.324075 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.341266 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.354580 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.367169 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.387431 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.399024 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.410955 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414473 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414483 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.423500 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.435570 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.449334 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.460877 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.479954 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.491347 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.502521 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.517784 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.517829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 
crc kubenswrapper[4985]: I0128 18:14:31.517844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.517877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.517891 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.582417 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:21:47.198515082 +0000 UTC
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.620935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.621001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.621022 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.621051 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.621076 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724611 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828326 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.894109 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/0.log"
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.894641 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c"}
Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.915917 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932754 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932872 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.934883 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.951353 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.966130 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.983643 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.998757 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.016609 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035914 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035984 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.058668 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.074293 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.092523 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.107664 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.122579 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.138273 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139472 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139651 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139753 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139844 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.151823 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.170313 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.182146 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242750 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242788 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.263167 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.263267 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.263197 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:32 crc kubenswrapper[4985]: E0128 18:14:32.263373 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:32 crc kubenswrapper[4985]: E0128 18:14:32.263471 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:32 crc kubenswrapper[4985]: E0128 18:14:32.263539 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.263721 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:32 crc kubenswrapper[4985]: E0128 18:14:32.263958 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345482 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345507 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345524 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449537 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449547 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552671 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.583291 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:45:58.875266737 +0000 UTC Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656674 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760758 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760833 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863552 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863644 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967517 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069864 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069922 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069968 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.173991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.174038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.174052 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.174086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.174108 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276601 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379962 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483177 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483197 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.584086 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:41:08.635826757 +0000 UTC Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586816 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586858 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690165 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690281 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690375 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799363 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799440 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902326 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902378 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902418 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006199 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006429 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109347 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109455 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.211971 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.212077 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.212096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.212122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.212143 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.263685 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.263685 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.264334 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.264535 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:34 crc kubenswrapper[4985]: E0128 18:14:34.264843 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:34 crc kubenswrapper[4985]: E0128 18:14:34.264859 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:34 crc kubenswrapper[4985]: E0128 18:14:34.264561 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:34 crc kubenswrapper[4985]: E0128 18:14:34.264931 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
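
The repeating NodeNotReady records above all carry the same Ready condition: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, so every pod that still needs a sandbox (network-check-source, network-check-target, network-metrics-daemon, networking-console-plugin) is skipped on each sync with "Error syncing pod, skipping". A minimal Go sketch that decodes one of the logged condition objects and applies the same readiness test — note the NodeCondition struct here is a trimmed-down local stand-in for illustration, not the real k8s.io/api/core/v1 type:

package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition holds just the fields used by the setters.go records
// above; a simplified local struct, not the upstream API type.
type NodeCondition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

func main() {
	// Condition payload copied from the log records above.
	raw := `{"type":"Ready","status":"False","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// A node is schedulable for sandbox-requiring pods only when
	// the Ready condition is "True".
	if c.Type == "Ready" && c.Status != "True" {
		fmt.Printf("node not ready (%s): %s\n", c.Reason, c.Message)
	}
}

The condition is re-evaluated roughly every 100ms here, so the block repeats with only the heartbeat/transition timestamps changing until the network provider writes a CNI config.
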
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315736 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315755 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419403 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419434 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419459 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523160 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523288 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523307 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.584827 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:08:42.684486952 +0000 UTC Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626507 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729710 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833459 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833471 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833509 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936586 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936663 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936721 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040030 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040165 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143743 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143788 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143807 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247328 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247352 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247413 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.286726 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351553 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351666 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351684 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401653 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401684 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.416335 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422590 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
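
The failed node-status updates above and below expose the shape of the kubelet's status patch: it is a strategic merge patch in which "$setElementOrder/conditions" pins the ordering of the merge-keyed conditions list (merge key "type"), the "conditions" array carries only the changed entries, and allocatable/capacity, the cached image list, and nodeInfo ride along in the same body. A minimal Go sketch assembling a patch body of the same shape — values copied from the log; this illustrates the JSON format only, not the kubelet's actual code path:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Strategic-merge-patch skeleton matching the failed payloads above:
	// $setElementOrder fixes list order; each conditions entry is merged
	// into the existing list by its "type" key.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			"conditions": []map[string]string{
				{
					"type":   "Ready",
					"status": "False",
					"reason": "KubeletNotReady",
				},
			},
		},
	}
	body, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}

Because the node patch is routed through the same expired node.network-node-identity.openshift.io webhook, it fails identically to the pod patches, and the kubelet retries ("Error updating node status, will retry") with a fresh copy of the same payload, as in the record that follows.
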
event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422671 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.437789 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443351 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443426 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.463564 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471113 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471509 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471630 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471749 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.485987 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492267 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492404 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.506095 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.506493 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508708 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.585656 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:56:41.060752651 +0000 UTC Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612292 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612474 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715825 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715935 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819508 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819623 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922023 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922136 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025274 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025369 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127762 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127827 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127845 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127860 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.230889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.230944 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.230963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.230991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.231014 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.263007 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.263102 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.263022 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:36 crc kubenswrapper[4985]: E0128 18:14:36.263207 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.263149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:36 crc kubenswrapper[4985]: E0128 18:14:36.263426 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:36 crc kubenswrapper[4985]: E0128 18:14:36.263583 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:36 crc kubenswrapper[4985]: E0128 18:14:36.263705 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334350 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334391 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437849 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437902 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541534 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.586044 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:56:21.822497904 +0000 UTC
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644509 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644552 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644570 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748439 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851225 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851332 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954357 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058135 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161903 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.264341 4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265668 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368821 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368923 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472288 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472339 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574864 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574928 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574944 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574984 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.586966 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:14:25.985471723 +0000 UTC
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678908 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678918 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781212 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781719 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781733 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884522 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884617 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.918376 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/2.log"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.920813 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.921323 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.937798 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.952633 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.966893 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.978515 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987378 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987454 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.994919 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.005886 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.029370 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be
487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.041412 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.047669 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.047781 
4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.047833 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.047809013 +0000 UTC m=+152.874371834 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.047951 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.047988 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.048045 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.048107 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.048076251 +0000 UTC m=+152.874639082 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.048136 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.048122023 +0000 UTC m=+152.874684854 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.054198 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.068557 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.083767 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090455 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090499 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.099547 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.112229 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.135020 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.148836 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.148882 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149017 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149034 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149048 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149093 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.149075209 +0000 UTC m=+152.975638030 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149251 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149290 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149298 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149321 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.149313826 +0000 UTC m=+152.975876647 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196094 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196238 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.198787 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.217197 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.230859 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.246331 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.264625 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.264698 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.264645 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.264832 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.264946 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.265015 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.265062 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.265105 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298909 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298937 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298946 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298970 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401753 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401794 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401812 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.505993 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.506055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.506072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.506099 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.506117 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.588012 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:40:38.543634065 +0000 UTC Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610463 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610528 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610583 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610608 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713405 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713452 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713475 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816786 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919745 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919763 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.927802 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/3.log" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.928839 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/2.log" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.933494 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" exitCode=1 Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.933565 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.933638 4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.934712 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.935044 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.960675 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.980833 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.005157 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.022769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.022851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 
crc kubenswrapper[4985]: I0128 18:14:39.022865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.022885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.022903 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.027429 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.049918 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.070079 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.094555 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 
2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.118517 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.125933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.126017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.126044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.126079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.126107 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.140363 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.154725 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.172782 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be
487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.191061 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.203645 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.218866 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.228986 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.229009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.229020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.229039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.229053 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.232013 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.241699 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.254033 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.264724 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332190 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332289 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332350 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435675 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539625 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.588736 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:39:18.166322547 +0000 UTC
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642506 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642573 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.746456 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.746871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.747039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.747255 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.747461 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851820 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851937 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.939941 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/3.log"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.945341 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc"
Jan 28 18:14:39 crc kubenswrapper[4985]: E0128 18:14:39.945647 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954830 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954900 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.967343 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.986141 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.008449 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.021949 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.040198 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.056660 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057570 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057669 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.071621 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.086236 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.108335 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.129252 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.145188 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161325 4985 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161438 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.162943 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.182559 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.199880 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.227642 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be
487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.243707 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.254796 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263209 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263299 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263347 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263395 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:40 crc kubenswrapper[4985]: E0128 18:14:40.263415 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:40 crc kubenswrapper[4985]: E0128 18:14:40.263514 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:40 crc kubenswrapper[4985]: E0128 18:14:40.263601 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:40 crc kubenswrapper[4985]: E0128 18:14:40.263713 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263883 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.270706 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367252 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367431 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367473 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470369 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470416 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573161 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573175 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.589935 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:30:54.09569034 +0000 UTC Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676599 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676660 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779432 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779452 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882427 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882504 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882517 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882547 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987250 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987370 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090482 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090548 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090578 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193524 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193534 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.280926 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296821 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296934 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296979 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.305813 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.325348 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.340520 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.356166 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.371337 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.388959 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399693 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399867 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.411476 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.430981 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.463346 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.478517 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.496972 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504306 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504338 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504349 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.515910 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.530072 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.542483 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.554068 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.570804 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.584133 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.590137 4985 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:10:29.835759676 +0000 UTC Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607180 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607222 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709175 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709225 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709272 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709286 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812770 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812809 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.915976 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.916031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.916045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.916064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.916076 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019497 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019523 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019555 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019582 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122945 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226166 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226179 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.263008 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.263149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.263274 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.263306 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:42 crc kubenswrapper[4985]: E0128 18:14:42.263194 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:42 crc kubenswrapper[4985]: E0128 18:14:42.263467 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:42 crc kubenswrapper[4985]: E0128 18:14:42.263724 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:42 crc kubenswrapper[4985]: E0128 18:14:42.263853 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328889 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432614 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432654 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.535996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.536050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.536078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.536097 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.536108 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.591187 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:01:24.823490125 +0000 UTC Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639215 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639251 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741820 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845519 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845532 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949672 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.052905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.053030 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.053050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.053078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.053101 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156113 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156125 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156157 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259503 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362651 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465275 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465354 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568315 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568335 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568347 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.592389 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:14:31.341134407 +0000 UTC Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671363 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671404 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774033 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774114 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876433 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979078 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082330 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082356 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082372 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186224 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.264371 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.264506 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:44 crc kubenswrapper[4985]: E0128 18:14:44.264556 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.264372 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:44 crc kubenswrapper[4985]: E0128 18:14:44.264708 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:44 crc kubenswrapper[4985]: E0128 18:14:44.264897 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.264988 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:44 crc kubenswrapper[4985]: E0128 18:14:44.265379 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289360 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289370 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289396 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.392438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.392512 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.392531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.393043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.393101 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497367 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497387 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.592860 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:20:32.212325996 +0000 UTC Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600565 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704816 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808395 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808593 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912362 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912427 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015807 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119578 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223212 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223246 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223310 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325958 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.429943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.430025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.430043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.430071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.430094 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533984 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.593414 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:00:49.208729713 +0000 UTC Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637687 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637749 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637815 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740686 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740729 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.743858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.743938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.743956 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.743982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.744001 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.765862 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.771805 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.771911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.771967 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.772017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.772042 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.795871 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801950 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.822206 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827189 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.848517 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856456 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856514 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.876961 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.877134 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880542 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.982978 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.983016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.983027 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.983044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.983056 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.086893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.087386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.087534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.087669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.087791 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191089 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191450 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191749 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.263955 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.263960 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:46 crc kubenswrapper[4985]: E0128 18:14:46.264109 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.263970 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:46 crc kubenswrapper[4985]: E0128 18:14:46.264284 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.264446 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:46 crc kubenswrapper[4985]: E0128 18:14:46.264586 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:46 crc kubenswrapper[4985]: E0128 18:14:46.264441 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294574 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.398000 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.398561 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.398734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.398937 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.399107 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503870 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.593836 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:20:32.871072433 +0000 UTC Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.607704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.607949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.608079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.608215 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.608393 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712292 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712432 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712456 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816298 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.919797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.920201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.920499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.920744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.920959 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024734 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry node-status cycle repeats roughly every 100 ms: four kubelet_node_status.go:724 "Recording event message for node" events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) followed by the setters.go:603 "Node became not ready"/KubeletNotReady condition, with only the heartbeat/transition timestamps advancing; timestamps 18:14:47.127349 through 18:14:47.543755 ...]
Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.594537 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:07:11.387238924 +0000 UTC
[... node-status cycle repeats, 18:14:47.649323 through 18:14:48.167864 ...]
Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.263366 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.263490 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.263534 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.263559 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:48 crc kubenswrapper[4985]: E0128 18:14:48.264045 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:48 crc kubenswrapper[4985]: E0128 18:14:48.264186 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:48 crc kubenswrapper[4985]: E0128 18:14:48.264355 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:48 crc kubenswrapper[4985]: E0128 18:14:48.264457 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271132 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271169 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271207 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271224 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.279758 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.374919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.374979 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.375003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.375032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.375054 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478136 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478155 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.581915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.581971 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.581984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.582003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.582016 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.595112 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:35:46.724707907 +0000 UTC Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685463 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685477 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787797 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891627 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891640 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994286 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994408 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.200914 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.200968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.200981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.200998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.201011 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303643 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303655 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407360 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407463 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511184 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511286 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.596163 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:27:35.095159805 +0000 UTC Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615483 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615505 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719204 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719276 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719328 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822873 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822945 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822967 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822990 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927316 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927445 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031192 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134548 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238077 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238165 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238190 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238209 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.263947 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.263998 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.264122 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.264222 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:50 crc kubenswrapper[4985]: E0128 18:14:50.264226 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:50 crc kubenswrapper[4985]: E0128 18:14:50.264363 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:50 crc kubenswrapper[4985]: E0128 18:14:50.264472 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:50 crc kubenswrapper[4985]: E0128 18:14:50.264639 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... node-status cycle repeats, 18:14:50.341338 through 18:14:50.547804 ...]
Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.597318 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:48:50.343595964 +0000 UTC
[... node-status cycle repeats, 18:14:50.651486 through 18:14:51.274749 ...]
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.287407 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.307181 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.331357 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.348397 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.365994 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378440 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378463 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.382549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.396320 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.429825 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be
487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.448696 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be38081c-43d9-4241-aea1-a14fb312a0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f697b1c16bcd1e36101e6b455b45641dbffe1cbf333e78f6a61de9228652f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67b
c07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.471468 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.482832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.482914 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.482939 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 
18:14:51.482974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.482997 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.495419 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a
8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.520398 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.542430 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.564880 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.591861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.591983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.592003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.592033 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.592052 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.598111 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:46:24.918812857 +0000 UTC
Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.609190 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.642506 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.657093 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.671790 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.688953 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694837 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.797974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.798017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.798026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.798041 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.798051 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901316 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901340 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901397 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005367 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005383 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005427 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108655 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.211069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.211509 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.211812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.212008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.212193 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.263431 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.263481 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.263489 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.263459 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:52 crc kubenswrapper[4985]: E0128 18:14:52.263642 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:52 crc kubenswrapper[4985]: E0128 18:14:52.263763 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:52 crc kubenswrapper[4985]: E0128 18:14:52.263841 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:52 crc kubenswrapper[4985]: E0128 18:14:52.263936 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315611 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315705 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420510 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420578 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523959 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.599124 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:17:17.146800271 +0000 UTC Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627576 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627594 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731335 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731376 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731442 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.833973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.834015 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.834026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.834044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.834056 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936977 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039660 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039683 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039702 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142881 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142905 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247142 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247214 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350498 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454034 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454130 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454207 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.556891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.556953 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.556968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.556989 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.557005 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.599999 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:03:23.726794769 +0000 UTC Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660542 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660562 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763193 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.263749 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.263894 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.264028 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.264085 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.264199 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.264371 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.264459 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.265142 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.265144 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.265540 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278767 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278855 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383470 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383488 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.601058 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:30:08.969259552 +0000 UTC
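The certificate_manager.go lines report the same expiration each time but a different rotation deadline: client-go's certificate manager jitters the deadline to a random point late in the certificate's validity period, and recomputes it on each pass. A minimal sketch of that jitter, assuming an illustrative 70-90% window and an assumed 90-day lifetime (the exact fractions in client-go may differ):

// sketch: jittered rotation deadline, as suggested by the varying log lines
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// pick a point between 70% and 90% of the validity period (assumed fractions)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-90 * 24 * time.Hour)                 // assumed lifetime
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}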
Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.601447 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:18:54.760095322 +0000 UTC
Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938367 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938451 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938470 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938510 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040303 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.064525 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069429 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.088444 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137693 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137776 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137850 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.152690 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158372 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158418 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.171186 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175238 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175282 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175298 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.188712 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.188866 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190719 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190732 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.263778 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.263822 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.263841 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.263930 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.264201 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.264361 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.264470 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.264644 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294689 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294703 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397567 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397668 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501568 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.602651 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:11:51.028489999 +0000 UTC Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605613 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709292 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709335 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812605 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916789 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916880 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020736 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020869 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020894 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124140 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227802 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227859 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227912 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331415 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435442 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435497 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539198 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539243 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.602997 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:27:20.366250282 +0000 UTC Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642724 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745928 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853181 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956268 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956289 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956301 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.063986 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.064136 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.064162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.064192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.064213 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
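The certificate_manager entry above reports a rotation deadline that changes on every pass (compare the 2025-12-01, 2025-12-31, 2025-12-17 and 2025-12-13 deadlines logged over the next few seconds). That pattern is consistent with a deadline re-drawn with jitter from the certificate's validity window on each evaluation; the Go sketch below assumes the commonly used 70-90% window and a one-year lifetime, neither of which is stated in the log. Since every drawn deadline already lies in the past relative to the log clock (2026-01-28), rotation is due immediately, which would explain the once-per-second re-evaluation.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// jitteredRotationDeadline picks a point 70-90% of the way through the
// certificate's validity window; this is an assumed model of the behaviour
// seen in the certificate_manager.go lines, not code taken from the kubelet.
func jitteredRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry reported in the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year lifetime
	for i := 0; i < 4; i++ {
		// A different deadline on each pass, mirroring the changing log lines.
		fmt.Println(jitteredRotationDeadline(notBefore, notAfter))
	}
}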
Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.263747 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.263891 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.263897 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.264051 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.264246 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.264360 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.264389 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.264464 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
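Every failed pod sync above carries the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal Go sketch of the check a libcni-based runtime performs on that directory follows; the .conf/.conflist/.json extension filter is an assumption about libcni's usual behaviour, not a quote of the runtime's source.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Lists candidate CNI config files the way libcni-based runtimes typically do;
// an empty result matches the "no CNI configuration file" condition above.
func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "->", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("candidate CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in", dir)
	}
}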
Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.603914 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:51:11.94536959 +0000 UTC
Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.730063 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.730271 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.730369 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:16:02.730341117 +0000 UTC m=+173.556903948 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered
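The nestedpendingoperations entry defers the next mount attempt by 1m4s, i.e. 64 s = 500 ms doubled seven times, which suggests an exponential per-operation backoff. The initial delay, factor and cap in the sketch below are assumptions chosen to reproduce the observed value, not constants read out of the kubelet.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed backoff parameters: start at 500 ms, double on every
	// consecutive failure, cap at 2m2s.
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("failure %d -> durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Failure 8 prints 1m4s, matching the nestedpendingoperations entry above.
}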
Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.604609 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 18:57:17.314837261 +0000 UTC
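Each record in this log uses the klog header layout <severity><MMDD> <HH:MM:SS.micros> <pid> <file>:<line>] <message>. A best-effort Go parser for that header is sketched below; the regular expression is inferred from the lines in this log, not an official grammar.

package main

import (
	"fmt"
	"regexp"
)

// Best-effort match for the klog header used throughout this log.
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

func main() {
	line := `I0128 18:14:59.604609 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\nmessage=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}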
Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.263466 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.263489 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.263502 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.263489 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:15:00 crc kubenswrapper[4985]: E0128 18:15:00.263626 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:15:00 crc kubenswrapper[4985]: E0128 18:15:00.263700 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:15:00 crc kubenswrapper[4985]: E0128 18:15:00.263880 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:15:00 crc kubenswrapper[4985]: E0128 18:15:00.264202 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.605040 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:19:32.878200761 +0000 UTC
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.079515 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.079968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.080063 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.080121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.080241 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.282463 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.300630 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.321607 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.336450 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
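
The patch bodies in these entries are JSON, but klog escapes them once per quoting level, which is why every quote surfaces as \\\". A minimal Go sketch for peeling a layer and pretty-printing the result; the quoted fragment below is illustrative (shaped like the payloads above, not copied byte-for-byte), and strconv.Unquote must be applied once per remaining escaping layer:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Illustrative one-layer-quoted patch fragment, shaped like the
	// payloads in the entries above (not copied verbatim from the log).
	quoted := `"{\"metadata\":{\"uid\":\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\"},\"status\":{\"phase\":\"Running\"}}"`

	// strconv.Unquote peels exactly one layer of Go string quoting;
	// repeat it for fields that were quoted more than once.
	raw, err := strconv.Unquote(quoted)
	if err != nil {
		panic(err)
	}

	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}
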
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.352750 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.368593 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389901 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389929 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.393669 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.405581 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be38081c-43d9-4241-aea1-a14fb312a0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f697b1c16bcd1e36101e6b455b45641dbffe1cbf333e78f6a61de9228652f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.424117 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.440195 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.458695 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
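
Earlier in this section the kube-multus container exits with "pollimmediate error: timed out waiting for the condition" while waiting for the readiness indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. Roughly that style of gate, sketched with apimachinery's older wait.PollImmediate helper (whose timeout error carries exactly the quoted wording); the one-second interval and 45-second budget are assumptions for the sketch, not multus's actual settings:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Path taken from the kube-multus restart message earlier in
	// this section; interval and timeout are illustrative.
	indicator := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"

	err := wait.PollImmediate(time.Second, 45*time.Second, func() (bool, error) {
		if _, statErr := os.Stat(indicator); statErr == nil {
			return true, nil // default network is ready
		} else if os.IsNotExist(statErr) {
			return false, nil // keep polling
		} else {
			return false, statErr // real I/O error: abort
		}
	})
	if err != nil {
		// On timeout this is wait.ErrWaitTimeout:
		// "timed out waiting for the condition", as quoted in the log.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("readiness indicator present")
}
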
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.475420 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493269 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493310 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493597 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.507498 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
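
The setters.go lines show why the node flips NotReady: the kubelet copies the runtime's NetworkReady=false report into the node's Ready condition. A short sketch, assuming k8s.io/api is available, that decodes a condition object like the one printed above (message abridged) and applies the same readiness test:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Condition object as printed by setters.go above; the message
	// field is abridged here.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady"}`

	var cond v1.NodeCondition
	if err := json.Unmarshal([]byte(raw), &cond); err != nil {
		panic(err)
	}

	// A node counts as ready only when the Ready condition is True.
	ready := cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue
	fmt.Printf("node ready: %v (reason=%s)\n", ready, cond.Reason)
}
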
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.526861 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.538873 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.556476 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.570228 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.584074 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 
18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595652 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.606043 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:18:32.893060577 +0000 UTC
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698127 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698199 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.800988 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.801049 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.801068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.801092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.801110 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.906422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.907852 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.907877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.907895 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.907907 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010736 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114034 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114083 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114115 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114132 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216490 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216529 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.263674 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.263720 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.263717 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.263821 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:15:02 crc kubenswrapper[4985]: E0128 18:15:02.263850 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:15:02 crc kubenswrapper[4985]: E0128 18:15:02.263978 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:15:02 crc kubenswrapper[4985]: E0128 18:15:02.264062 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:15:02 crc kubenswrapper[4985]: E0128 18:15:02.264145 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319754 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319803 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422980 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422997 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526457 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526470 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526500 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.607018 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:35:55.479402302 +0000 UTC
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630175 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630216 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733995 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837214 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837278 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837307 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940165 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940278 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940325 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043104 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043160 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.145933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.146005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.146026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.146060 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.146081 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250284 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353208 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353262 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353275 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353284 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456179 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456296 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456377 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559421 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.607446 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:54:08.135925844 +0000 UTC
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663724 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663766 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767004 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767116 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767138 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870663 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973326 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973379 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076247 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076410 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180198 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180233 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.263467 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.263592 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:04 crc kubenswrapper[4985]: E0128 18:15:04.263672 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.263755 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:04 crc kubenswrapper[4985]: E0128 18:15:04.263807 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.263934 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:04 crc kubenswrapper[4985]: E0128 18:15:04.264002 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:04 crc kubenswrapper[4985]: E0128 18:15:04.264187 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283157 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387886 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491589 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491665 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594647 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594701 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594723 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.608051 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:34:20.236524035 +0000 UTC
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699442 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807643 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807805 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807830 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911524 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013907 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013942 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013976 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116363 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
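
The certificate_manager entries interleaved above and below log a different "rotation deadline" on every pass while the expiration stays fixed at 2026-02-24 05:53:03. That pattern is consistent with client-go's certificate manager drawing the deadline at a random point in roughly the 70-90% band of the certificate's validity each time it evaluates. A sketch under that assumption; the NotBefore value is hypothetical, and only the expiration is taken from the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Assumed behavior: deadline = NotBefore + jitter * lifetime, jitter in [0.7, 0.9).
// Re-drawing the jitter on each pass would explain why the logged deadline moves
// around (2025-11-22, 2025-11-14, 2025-12-29, ...) while the expiry stays put.
func main() {
	notBefore := time.Date(2025, time.June, 9, 0, 0, 0, 0, time.UTC)      // hypothetical issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	lifetime := notAfter.Sub(notBefore)
	for i := 0; i < 4; i++ {
		jitter := 0.7 + 0.3*rand.Float64()
		deadline := notBefore.Add(time.Duration(float64(lifetime) * jitter))
		fmt.Println("rotation deadline:", deadline.Format("2006-01-02 15:04:05 -0700 MST"))
	}
}
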
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219542 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219667 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322364 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322466 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425595 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425611 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528596 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528676 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528696 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528709 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.608707 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:23:11.110465825 +0000 UTC
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631831 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631880 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735127 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735165 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735177 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839350 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839399 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942577 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942616 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046562 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184490 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184615 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184636 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.263660 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.263722 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.263681 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:06 crc kubenswrapper[4985]: E0128 18:15:06.263792 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.263856 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:15:06 crc kubenswrapper[4985]: E0128 18:15:06.263879 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:15:06 crc kubenswrapper[4985]: E0128 18:15:06.264033 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:15:06 crc kubenswrapper[4985]: E0128 18:15:06.264298 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286982 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390037 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390099 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390130 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493148 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493168 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574810 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574872 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574933 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.608930 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:50:21.356361773 +0000 UTC
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.608994 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612590 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612624 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.620658 4985 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.652975 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"]
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.653619 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.655623 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.656154 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.656633 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.657156 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725525 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c5ff91d-acf0-42d7-877b-c60b68cd5248-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725594 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5ff91d-acf0-42d7-877b-c60b68cd5248-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c5ff91d-acf0-42d7-877b-c60b68cd5248-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725748 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.734949 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" podStartSLOduration=86.73491856 podStartE2EDuration="1m26.73491856s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.720132545 +0000 UTC m=+117.546695376" watchObservedRunningTime="2026-01-28 18:15:06.73491856 +0000 UTC m=+117.561481401"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.735201 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podStartSLOduration=86.735193948 podStartE2EDuration="1m26.735193948s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.734651382 +0000 UTC m=+117.561214243" watchObservedRunningTime="2026-01-28 18:15:06.735193948 +0000 UTC m=+117.561756789"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.794478 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g2g4k" podStartSLOduration=86.794447709 podStartE2EDuration="1m26.794447709s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.772813733 +0000 UTC m=+117.599376604" watchObservedRunningTime="2026-01-28 18:15:06.794447709 +0000 UTC m=+117.621010570"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.824823 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=31.82480241 podStartE2EDuration="31.82480241s" podCreationTimestamp="2026-01-28 18:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.823889944 +0000 UTC m=+117.650452765" watchObservedRunningTime="2026-01-28 18:15:06.82480241 +0000 UTC m=+117.651365231"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c5ff91d-acf0-42d7-877b-c60b68cd5248-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826741 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5ff91d-acf0-42d7-877b-c60b68cd5248-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826784 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c5ff91d-acf0-42d7-877b-c60b68cd5248-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826852 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826896 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.827050 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.827125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.828196 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c5ff91d-acf0-42d7-877b-c60b68cd5248-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.838002 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5ff91d-acf0-42d7-877b-c60b68cd5248-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.838318 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.838302667 podStartE2EDuration="18.838302667s" podCreationTimestamp="2026-01-28 18:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.837006129 +0000 UTC m=+117.663568950" watchObservedRunningTime="2026-01-28 18:15:06.838302667 +0000 UTC m=+117.664865488"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.846122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c5ff91d-acf0-42d7-877b-c60b68cd5248-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.853716 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=90.853687689 podStartE2EDuration="1m30.853687689s" podCreationTimestamp="2026-01-28 18:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.852297488 +0000 UTC m=+117.678860329" watchObservedRunningTime="2026-01-28 18:15:06.853687689 +0000 UTC m=+117.680250530"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.881389 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.881362872 podStartE2EDuration="1m3.881362872s" podCreationTimestamp="2026-01-28 18:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.869071541 +0000 UTC m=+117.695634372" watchObservedRunningTime="2026-01-28 18:15:06.881362872 +0000 UTC m=+117.707925713"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.955641 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dlz95" podStartSLOduration=86.955613893 podStartE2EDuration="1m26.955613893s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.955056137 +0000 UTC m=+117.781618958" watchObservedRunningTime="2026-01-28 18:15:06.955613893 +0000 UTC m=+117.782176714"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.969634 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.969609244 podStartE2EDuration="1m30.969609244s" podCreationTimestamp="2026-01-28 18:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.968926464 +0000 UTC m=+117.795489295" watchObservedRunningTime="2026-01-28 18:15:06.969609244 +0000 UTC m=+117.796172065"
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.972429 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"
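
The pod_startup_latency_tracker entries above are internally redundant, which makes them easy to sanity-check: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, with both pulling timestamps zeroed because no image pull was observed. A short Go check using the values from the multus-additional-cni-plugins-6j9qp entry:

package main

import (
	"fmt"
	"time"
)

// Re-derives podStartSLOduration from the two timestamps carried in the same
// log entry (values copied from the multus-additional-cni-plugins-6j9qp line).
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-28 18:13:40 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-28 18:15:06.73491856 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created).Seconds()) // 86.73491856, matching podStartSLOduration
}
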
Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.983387 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9xm27" podStartSLOduration=86.983364729 podStartE2EDuration="1m26.983364729s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.983356638 +0000 UTC m=+117.809919489" watchObservedRunningTime="2026-01-28 18:15:06.983364729 +0000 UTC m=+117.809927550"
Jan 28 18:15:07 crc kubenswrapper[4985]: I0128 18:15:07.001287 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" podStartSLOduration=87.001268714 podStartE2EDuration="1m27.001268714s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:07.001135431 +0000 UTC m=+117.827698272" watchObservedRunningTime="2026-01-28 18:15:07.001268714 +0000 UTC m=+117.827831545"
Jan 28 18:15:07 crc kubenswrapper[4985]: I0128 18:15:07.057791 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" event={"ID":"4c5ff91d-acf0-42d7-877b-c60b68cd5248","Type":"ContainerStarted","Data":"73b3b1bacd3a4d22a1b1bbf67172aeb8d6cfc0a5efe9e729c221693ea17bbadb"}
Jan 28 18:15:07 crc kubenswrapper[4985]: I0128 18:15:07.264383 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc"
Jan 28 18:15:07 crc kubenswrapper[4985]: E0128 18:15:07.264660 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12"
Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.062855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" event={"ID":"4c5ff91d-acf0-42d7-877b-c60b68cd5248","Type":"ContainerStarted","Data":"e828b99afd1d732b6cbe43ee2cfef2620b6af0c16cc64d0449320baebed48dcd"}
Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.263509 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.263559 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.263649 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:08 crc kubenswrapper[4985]: E0128 18:15:08.263697 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
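
The CrashLoopBackOff entry just above reports "back-off 40s" for ovnkube-controller. Assuming the commonly documented kubelet restart backoff (10s initial delay, doubling per consecutive crash, capped at 5m; the log itself does not state these constants), 40s would put this at the third consecutive crash:

package main

import (
	"fmt"
	"time"
)

// Prints the assumed kubelet container restart backoff ladder: 10s, 20s, 40s,
// 80s, 160s, then capped at 5m. "back-off 40s" matches the third crash.
func main() {
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash %d -> back-off %s\n", crash, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
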
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.263522 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:08 crc kubenswrapper[4985]: E0128 18:15:08.263810 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:08 crc kubenswrapper[4985]: E0128 18:15:08.264092 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:08 crc kubenswrapper[4985]: E0128 18:15:08.264172 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:10 crc kubenswrapper[4985]: I0128 18:15:10.263669 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:10 crc kubenswrapper[4985]: I0128 18:15:10.263755 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:10 crc kubenswrapper[4985]: I0128 18:15:10.263803 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:10 crc kubenswrapper[4985]: I0128 18:15:10.263707 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:10 crc kubenswrapper[4985]: E0128 18:15:10.263914 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:10 crc kubenswrapper[4985]: E0128 18:15:10.264009 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:10 crc kubenswrapper[4985]: E0128 18:15:10.264129 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:10 crc kubenswrapper[4985]: E0128 18:15:10.264245 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:11 crc kubenswrapper[4985]: E0128 18:15:11.267524 4985 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 18:15:11 crc kubenswrapper[4985]: E0128 18:15:11.697057 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:15:12 crc kubenswrapper[4985]: I0128 18:15:12.263493 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:12 crc kubenswrapper[4985]: I0128 18:15:12.263536 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:12 crc kubenswrapper[4985]: I0128 18:15:12.263601 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:12 crc kubenswrapper[4985]: E0128 18:15:12.263618 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:12 crc kubenswrapper[4985]: I0128 18:15:12.263636 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:12 crc kubenswrapper[4985]: E0128 18:15:12.263735 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:12 crc kubenswrapper[4985]: E0128 18:15:12.263829 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:12 crc kubenswrapper[4985]: E0128 18:15:12.263906 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:14 crc kubenswrapper[4985]: I0128 18:15:14.263794 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:14 crc kubenswrapper[4985]: I0128 18:15:14.263842 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:14 crc kubenswrapper[4985]: E0128 18:15:14.264100 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:14 crc kubenswrapper[4985]: I0128 18:15:14.263913 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:14 crc kubenswrapper[4985]: I0128 18:15:14.263862 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:14 crc kubenswrapper[4985]: E0128 18:15:14.264648 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:14 crc kubenswrapper[4985]: E0128 18:15:14.264782 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:14 crc kubenswrapper[4985]: E0128 18:15:14.264944 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:16 crc kubenswrapper[4985]: I0128 18:15:16.263947 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:16 crc kubenswrapper[4985]: I0128 18:15:16.264033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:16 crc kubenswrapper[4985]: I0128 18:15:16.264033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:16 crc kubenswrapper[4985]: I0128 18:15:16.264165 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.264181 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.264368 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.264496 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.264676 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.698982 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.098935 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/1.log" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100080 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/0.log" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100159 4985 generic.go:334] "Generic (PLEG): container finished" podID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" containerID="72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c" exitCode=1 Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100203 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerDied","Data":"72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c"} Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100288 4985 scope.go:117] "RemoveContainer" containerID="9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100957 4985 scope.go:117] "RemoveContainer" containerID="72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c" Jan 28 18:15:17 crc kubenswrapper[4985]: E0128 18:15:17.101488 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-g2g4k_openshift-multus(14fdd73a-b8dd-42da-88b4-2ccb314c4f7a)\"" pod="openshift-multus/multus-g2g4k" podUID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.129886 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" podStartSLOduration=97.129859821 podStartE2EDuration="1m37.129859821s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:08.081719196 +0000 UTC m=+118.908282047" watchObservedRunningTime="2026-01-28 18:15:17.129859821 +0000 UTC m=+127.956422682" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.106658 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/1.log" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.263429 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.263443 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.263591 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.263554 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:18 crc kubenswrapper[4985]: E0128 18:15:18.263791 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:18 crc kubenswrapper[4985]: E0128 18:15:18.264065 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:18 crc kubenswrapper[4985]: E0128 18:15:18.264179 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:18 crc kubenswrapper[4985]: E0128 18:15:18.264313 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:20 crc kubenswrapper[4985]: I0128 18:15:20.263580 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:20 crc kubenswrapper[4985]: I0128 18:15:20.263625 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:20 crc kubenswrapper[4985]: I0128 18:15:20.263689 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:20 crc kubenswrapper[4985]: E0128 18:15:20.263789 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:20 crc kubenswrapper[4985]: I0128 18:15:20.263821 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:20 crc kubenswrapper[4985]: E0128 18:15:20.263966 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:20 crc kubenswrapper[4985]: E0128 18:15:20.264067 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:20 crc kubenswrapper[4985]: E0128 18:15:20.264139 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:21 crc kubenswrapper[4985]: E0128 18:15:21.699936 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.263423 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:22 crc kubenswrapper[4985]: E0128 18:15:22.263666 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.264439 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.263415 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.264612 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:22 crc kubenswrapper[4985]: E0128 18:15:22.264814 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.264901 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:22 crc kubenswrapper[4985]: E0128 18:15:22.264973 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:22 crc kubenswrapper[4985]: E0128 18:15:22.265179 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.037107 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hrd6k"] Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.132033 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/3.log" Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.135693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154"} Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.135732 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:23 crc kubenswrapper[4985]: E0128 18:15:23.136016 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.171307 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podStartSLOduration=103.171282099 podStartE2EDuration="1m43.171282099s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:23.171174866 +0000 UTC m=+133.997737727" watchObservedRunningTime="2026-01-28 18:15:23.171282099 +0000 UTC m=+133.997844930" Jan 28 18:15:24 crc kubenswrapper[4985]: I0128 18:15:24.263420 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:24 crc kubenswrapper[4985]: I0128 18:15:24.263451 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:24 crc kubenswrapper[4985]: I0128 18:15:24.263521 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:24 crc kubenswrapper[4985]: E0128 18:15:24.263669 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:24 crc kubenswrapper[4985]: E0128 18:15:24.263766 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:24 crc kubenswrapper[4985]: E0128 18:15:24.263846 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:25 crc kubenswrapper[4985]: I0128 18:15:25.263826 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:25 crc kubenswrapper[4985]: E0128 18:15:25.264078 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:26 crc kubenswrapper[4985]: I0128 18:15:26.263672 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:26 crc kubenswrapper[4985]: I0128 18:15:26.263706 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:26 crc kubenswrapper[4985]: I0128 18:15:26.263706 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:26 crc kubenswrapper[4985]: E0128 18:15:26.263835 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:26 crc kubenswrapper[4985]: E0128 18:15:26.263933 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:26 crc kubenswrapper[4985]: E0128 18:15:26.263979 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:26 crc kubenswrapper[4985]: E0128 18:15:26.702027 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:15:27 crc kubenswrapper[4985]: I0128 18:15:27.263834 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:27 crc kubenswrapper[4985]: E0128 18:15:27.264157 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:28 crc kubenswrapper[4985]: I0128 18:15:28.263867 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:28 crc kubenswrapper[4985]: I0128 18:15:28.263902 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:28 crc kubenswrapper[4985]: E0128 18:15:28.264211 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:28 crc kubenswrapper[4985]: I0128 18:15:28.264305 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:28 crc kubenswrapper[4985]: E0128 18:15:28.264420 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:28 crc kubenswrapper[4985]: E0128 18:15:28.264620 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:28 crc kubenswrapper[4985]: I0128 18:15:28.265365 4985 scope.go:117] "RemoveContainer" containerID="72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c" Jan 28 18:15:29 crc kubenswrapper[4985]: I0128 18:15:29.159661 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/1.log" Jan 28 18:15:29 crc kubenswrapper[4985]: I0128 18:15:29.160101 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535"} Jan 28 18:15:29 crc kubenswrapper[4985]: I0128 18:15:29.263367 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:29 crc kubenswrapper[4985]: E0128 18:15:29.263551 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:30 crc kubenswrapper[4985]: I0128 18:15:30.263910 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:30 crc kubenswrapper[4985]: I0128 18:15:30.263978 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:30 crc kubenswrapper[4985]: I0128 18:15:30.264106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:30 crc kubenswrapper[4985]: E0128 18:15:30.264204 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:30 crc kubenswrapper[4985]: E0128 18:15:30.264365 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:30 crc kubenswrapper[4985]: E0128 18:15:30.264608 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:31 crc kubenswrapper[4985]: I0128 18:15:31.263122 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:31 crc kubenswrapper[4985]: E0128 18:15:31.265155 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.263488 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.263544 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.263580 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.266583 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.266769 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.268355 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.268398 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 18:15:33 crc kubenswrapper[4985]: I0128 18:15:33.263635 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:33 crc kubenswrapper[4985]: I0128 18:15:33.267164 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 18:15:33 crc kubenswrapper[4985]: I0128 18:15:33.267241 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.383629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.440370 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.441334 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.443505 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.444357 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.447483 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.447855 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.448076 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.448363 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.450000 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.450155 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.451125 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.452386 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.453201 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.454211 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hpz9q"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.455266 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.456594 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.456724 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.456942 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.457270 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.457891 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.457960 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458125 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458202 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458401 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458446 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458666 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458881 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.459101 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.463429 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hjjf7"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.464306 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.465153 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.465681 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.466305 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.466800 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.466980 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.467077 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.467821 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.468550 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pcb4d"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.468981 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.469930 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.472030 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.472580 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.473167 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.475791 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.476485 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.477620 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.477870 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.477956 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2wxf2"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.477962 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478001 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478064 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478156 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478179 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478736 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.480826 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.481743 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.482739 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483149 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483236 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483396 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483449 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483576 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483608 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483417 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483750 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.484192 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.485755 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.486211 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.486439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.486612 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.486842 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.488186 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.492210 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.494225 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.500446 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.504029 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.529593 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.531387 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.531675 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.532660 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bmvks"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533145 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533442 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533593 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533800 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533995 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hk2lj"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.534737 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.534905 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.534943 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.535121 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.535662 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.535771 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.536568 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.537885 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538180 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538219 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538336 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538416 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538465 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538351 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538635 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538694 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538393 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538850 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538648 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539030 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539214 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539449 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 18:15:37 crc 
kubenswrapper[4985]: I0128 18:15:37.538583 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540101 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538806 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538982 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539808 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540292 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540321 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540005 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539879 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540033 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-node-pullsecrets\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540639 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-serving-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540672 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/715ad1e8-6659-4a18-a007-ad31ffa7044e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540729 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540771 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8b060f-1416-4676-af77-45c0b411ff59-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540829 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540871 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540911 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q29sg\" (UniqueName: \"kubernetes.io/projected/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-kube-api-access-q29sg\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540826 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541036 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541086 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541119 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/715ad1e8-6659-4a18-a007-ad31ffa7044e-serving-cert\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" 
Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541181 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541422 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-dir\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541647 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541716 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-config\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541912 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541993 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5691988c-c881-437e-aa60-317e424b3170-trusted-ca\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542099 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542177 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a8b060f-1416-4676-af77-45c0b411ff59-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542298 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/010ced82-1614-4ade-958b-d12ea6cda1b9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542854 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77rrz\" (UniqueName: \"kubernetes.io/projected/715ad1e8-6659-4a18-a007-ad31ffa7044e-kube-api-access-77rrz\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542951 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543026 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-encryption-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543097 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543352 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543476 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543550 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-client\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543647 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-serving-cert\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543721 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542144 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542310 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njzzn\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-kube-api-access-njzzn\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544021 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-image-import-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544111 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544180 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544526 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: 
\"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544665 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544754 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-client\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545041 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-auth-proxy-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545126 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be08d23e-d6c9-4b42-904b-c36b05dfc316-serving-cert\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545194 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010ced82-1614-4ade-958b-d12ea6cda1b9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545331 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-config\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545393 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545457 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545428 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545569 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545608 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-images\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545681 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545718 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545612 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 18:15:37 crc 
kubenswrapper[4985]: I0128 18:15:37.545689 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545785 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545788 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mzzm\" (UniqueName: \"kubernetes.io/projected/be08d23e-d6c9-4b42-904b-c36b05dfc316-kube-api-access-7mzzm\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545814 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545837 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5bb6\" (UniqueName: \"kubernetes.io/projected/c731b198-314f-46a9-ad13-a4cc6c7bab94-kube-api-access-f5bb6\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545721 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545894 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545917 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545886 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545950 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-policies\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546034 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5691988c-c881-437e-aa60-317e424b3170-metrics-tls\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546078 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546118 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxvl\" (UniqueName: \"kubernetes.io/projected/010ced82-1614-4ade-958b-d12ea6cda1b9-kube-api-access-vxxvl\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546168 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546222 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-service-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546293 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit-dir\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546333 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546369 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546431 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffjk\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-kube-api-access-6ffjk\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546493 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26d6j\" (UniqueName: \"kubernetes.io/projected/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-kube-api-access-26d6j\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546552 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5nw2\" (UniqueName: \"kubernetes.io/projected/218b57d8-c3a3-4a33-a3ef-6701cf557911-kube-api-access-h5nw2\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546583 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-machine-approver-tls\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc 
kubenswrapper[4985]: I0128 18:15:37.546640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546670 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218b57d8-c3a3-4a33-a3ef-6701cf557911-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546699 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546728 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-encryption-config\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546790 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8b060f-1416-4676-af77-45c0b411ff59-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546829 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjx6\" (UniqueName: \"kubernetes.io/projected/ebf5f82e-2a14-49d9-b670-59ed73e71203-kube-api-access-4kjx6\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546864 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rpw6\" (UniqueName: \"kubernetes.io/projected/25061ce4-ca31-4da7-ad36-c6535e1d2028-kube-api-access-8rpw6\") pod \"downloads-7954f5f757-hpz9q\" (UID: \"25061ce4-ca31-4da7-ad36-c6535e1d2028\") " pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546899 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-serving-cert\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546942 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546975 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.547343 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.547543 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.547695 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.547957 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.548113 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.548304 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.549515 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.551593 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.551888 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.555151 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540928 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.557148 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.557321 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.557767 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" 
Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.561133 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.561353 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.563421 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.564209 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.564846 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.565711 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.580024 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.594115 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j6799"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.594422 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.607789 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.608753 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.608865 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.610340 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hpz9q"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.611093 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.615017 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.617444 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.618497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.619688 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.621874 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.622777 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.624523 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.625119 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.625928 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.626871 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.628581 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.629396 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.631467 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hk2lj"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.632433 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pcb4d"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.633613 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.634512 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.639285 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.640374 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.642318 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.643067 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.644890 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.645085 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.645884 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.646649 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.647891 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-config\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.647934 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.647962 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.647984 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648002 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648027 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648047 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-config\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648066 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-images\") pod 
\"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648083 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648101 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648119 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mzzm\" (UniqueName: \"kubernetes.io/projected/be08d23e-d6c9-4b42-904b-c36b05dfc316-kube-api-access-7mzzm\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648138 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648154 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5bb6\" (UniqueName: \"kubernetes.io/projected/c731b198-314f-46a9-ad13-a4cc6c7bab94-kube-api-access-f5bb6\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648170 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648186 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648205 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648222 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648241 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-policies\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648863 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.649675 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650236 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650415 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5691988c-c881-437e-aa60-317e424b3170-metrics-tls\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650579 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650642 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4skx\" (UniqueName: \"kubernetes.io/projected/9675b92d-1a0c-460b-bbad-cd6abab61f2f-kube-api-access-v4skx\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650867 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650983 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650984 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651044 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxvl\" (UniqueName: \"kubernetes.io/projected/010ced82-1614-4ade-958b-d12ea6cda1b9-kube-api-access-vxxvl\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650805 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651069 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: 
\"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651055 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651108 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-service-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651131 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-config\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651169 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit-dir\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651193 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651219 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651240 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651290 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-service-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651317 4985 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6ffjk\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-kube-api-access-6ffjk\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651336 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651355 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5nw2\" (UniqueName: \"kubernetes.io/projected/218b57d8-c3a3-4a33-a3ef-6701cf557911-kube-api-access-h5nw2\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651377 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-machine-approver-tls\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651383 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit-dir\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651394 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653124 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26d6j\" (UniqueName: \"kubernetes.io/projected/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-kube-api-access-26d6j\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651723 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653152 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-trusted-ca\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651874 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653178 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218b57d8-c3a3-4a33-a3ef-6701cf557911-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653199 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653217 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-encryption-config\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8b060f-1416-4676-af77-45c0b411ff59-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653271 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9675b92d-1a0c-460b-bbad-cd6abab61f2f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653296 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kjx6\" (UniqueName: \"kubernetes.io/projected/ebf5f82e-2a14-49d9-b670-59ed73e71203-kube-api-access-4kjx6\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651685 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-policies\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653012 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653459 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.654104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.654353 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651162 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-images\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.655084 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.652985 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.655451 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fn9d5"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.655458 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc 
kubenswrapper[4985]: I0128 18:15:37.655785 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rpw6\" (UniqueName: \"kubernetes.io/projected/25061ce4-ca31-4da7-ad36-c6535e1d2028-kube-api-access-8rpw6\") pod \"downloads-7954f5f757-hpz9q\" (UID: \"25061ce4-ca31-4da7-ad36-c6535e1d2028\") " pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.655903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-serving-cert\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656018 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656141 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656294 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-node-pullsecrets\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656413 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-serving-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656517 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/715ad1e8-6659-4a18-a007-ad31ffa7044e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656634 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8b060f-1416-4676-af77-45c0b411ff59-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656952 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6fx\" (UniqueName: \"kubernetes.io/projected/db632812-bc0d-41f2-9c01-a19d40eb69be-kube-api-access-dn6fx\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.657133 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.657239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.658294 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q29sg\" (UniqueName: \"kubernetes.io/projected/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-kube-api-access-q29sg\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662509 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662516 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662543 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.659179 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-dir\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.660190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662627 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.660449 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-node-pullsecrets\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662398 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-encryption-config\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.660415 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-serving-cert\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662709 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/715ad1e8-6659-4a18-a007-ad31ffa7044e-serving-cert\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.658220 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/218b57d8-c3a3-4a33-a3ef-6701cf557911-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662773 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662811 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzrqc\" (UniqueName: \"kubernetes.io/projected/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-kube-api-access-fzrqc\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.659564 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/715ad1e8-6659-4a18-a007-ad31ffa7044e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662874 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-config\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662960 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.660383 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.663054 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-dir\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.658645 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.663931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-config\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.664424 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qnrsp"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.665333 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.665541 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.658230 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.665927 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666596 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmnqc\" (UniqueName: \"kubernetes.io/projected/bf0cd343-6643-4463-bb9b-6e291a601901-kube-api-access-mmnqc\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666706 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666769 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a8b060f-1416-4676-af77-45c0b411ff59-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5691988c-c881-437e-aa60-317e424b3170-trusted-ca\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-config\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666893 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wl24\" (UniqueName: \"kubernetes.io/projected/a1f443aa-50c0-4865-b6a3-a07d13b71e73-kube-api-access-9wl24\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666914 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666960 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667050 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-encryption-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667078 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/010ced82-1614-4ade-958b-d12ea6cda1b9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667109 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77rrz\" (UniqueName: \"kubernetes.io/projected/715ad1e8-6659-4a18-a007-ad31ffa7044e-kube-api-access-77rrz\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667293 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667376 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667401 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-client\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667426 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-config\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667450 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-serving-cert\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a8b060f-1416-4676-af77-45c0b411ff59-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667467 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-client\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668460 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-serving-cert\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668504 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9675b92d-1a0c-460b-bbad-cd6abab61f2f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668541 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njzzn\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-kube-api-access-njzzn\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf0cd343-6643-4463-bb9b-6e291a601901-metrics-tls\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668637 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-image-import-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: 
\"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668660 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.669405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.669452 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.670018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.670386 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/715ad1e8-6659-4a18-a007-ad31ffa7044e-serving-cert\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.670771 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.670874 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5691988c-c881-437e-aa60-317e424b3170-trusted-ca\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.671104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.671163 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.671695 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672120 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672574 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db632812-bc0d-41f2-9c01-a19d40eb69be-serving-cert\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672675 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672739 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672768 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672794 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672825 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjg8s\" (UniqueName: \"kubernetes.io/projected/d3e3ff22-4547-453f-bd6a-bf8d4098f3a3-kube-api-access-jjg8s\") pod \"migrator-59844c95c7-k5vgf\" (UID: \"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-auth-proxy-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: 
\"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672985 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be08d23e-d6c9-4b42-904b-c36b05dfc316-serving-cert\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673012 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673034 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-client\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673059 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010ced82-1614-4ade-958b-d12ea6cda1b9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673493 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673775 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.674654 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010ced82-1614-4ade-958b-d12ea6cda1b9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.677982 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fzzsl"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.678568 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679064 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679120 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679552 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679988 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.680174 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.680579 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-serving-cert\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679198 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.680725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-serving-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.681271 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-image-import-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.681807 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-machine-approver-tls\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.682590 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.683568 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.685064 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.688959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-encryption-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689420 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-auth-proxy-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689573 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-client\") pod 
\"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689652 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/010ced82-1614-4ade-958b-d12ea6cda1b9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689885 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689886 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5691988c-c881-437e-aa60-317e424b3170-metrics-tls\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.690005 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.690370 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.690715 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-client\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.691140 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.691364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.692229 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8b060f-1416-4676-af77-45c0b411ff59-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: 
\"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.692521 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.694423 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.694887 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.698514 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.700902 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.701527 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hjjf7"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.703911 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.713182 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.713425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be08d23e-d6c9-4b42-904b-c36b05dfc316-serving-cert\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.716219 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bmvks"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.719812 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.719854 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.720938 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.722513 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.724040 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j6799"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.724177 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.724659 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.725965 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.727228 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2wxf2"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.729031 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.730470 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.731663 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.733099 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.734349 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.735689 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-g5knd"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.737076 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g5knd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.737123 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.738412 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.739704 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.741039 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.742286 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fzzsl"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.743586 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.744170 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.744858 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fn9d5"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.746016 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g5knd"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.747600 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2lzzr"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.748132 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.749390 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5zj27"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.750382 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.750914 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5zj27"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.764844 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.774910 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn6fx\" (UniqueName: \"kubernetes.io/projected/db632812-bc0d-41f2-9c01-a19d40eb69be-kube-api-access-dn6fx\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.774972 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.774994 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzrqc\" (UniqueName: \"kubernetes.io/projected/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-kube-api-access-fzrqc\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmnqc\" (UniqueName: \"kubernetes.io/projected/bf0cd343-6643-4463-bb9b-6e291a601901-kube-api-access-mmnqc\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775056 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-config\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775078 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wl24\" (UniqueName: \"kubernetes.io/projected/a1f443aa-50c0-4865-b6a3-a07d13b71e73-kube-api-access-9wl24\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775116 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 
18:15:37.775141 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-config\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-serving-cert\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775174 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-client\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775190 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9675b92d-1a0c-460b-bbad-cd6abab61f2f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf0cd343-6643-4463-bb9b-6e291a601901-metrics-tls\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775232 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db632812-bc0d-41f2-9c01-a19d40eb69be-serving-cert\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775262 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775281 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjg8s\" (UniqueName: \"kubernetes.io/projected/d3e3ff22-4547-453f-bd6a-bf8d4098f3a3-kube-api-access-jjg8s\") pod \"migrator-59844c95c7-k5vgf\" (UID: \"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775306 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-config\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775371 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4skx\" (UniqueName: \"kubernetes.io/projected/9675b92d-1a0c-460b-bbad-cd6abab61f2f-kube-api-access-v4skx\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775388 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775423 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-service-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775456 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-trusted-ca\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775479 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9675b92d-1a0c-460b-bbad-cd6abab61f2f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.776195 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9675b92d-1a0c-460b-bbad-cd6abab61f2f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.778901 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9675b92d-1a0c-460b-bbad-cd6abab61f2f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.779293 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf0cd343-6643-4463-bb9b-6e291a601901-metrics-tls\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.784519 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.804739 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.824140 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.828900 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-serving-cert\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.844805 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.848861 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-client\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.864169 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.884948 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.885856 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-config\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.904856 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.906701 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.924217 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.926583 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-service-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.964317 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.984563 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.004140 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.009519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.023845 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.026270 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-config\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.044355 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.064915 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.085036 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.105184 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.124395 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.127504 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-config\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.144409 4985 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.165211 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.173044 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db632812-bc0d-41f2-9c01-a19d40eb69be-serving-cert\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.205928 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.207056 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.219099 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-trusted-ca\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.225189 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.244060 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.265029 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.284100 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.304672 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.324862 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.344644 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.347300 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.364104 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 18:15:38 crc 
kubenswrapper[4985]: I0128 18:15:38.384446 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.391325 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.425395 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.445138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.464757 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.484546 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.505770 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.524881 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.544528 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.564612 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.585722 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.605219 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.624997 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.662452 4985 request.go:700] Waited for 1.013031228s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.673196 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5bb6\" (UniqueName: \"kubernetes.io/projected/c731b198-314f-46a9-ad13-a4cc6c7bab94-kube-api-access-f5bb6\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 
18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.694826 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.716501 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.734480 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mzzm\" (UniqueName: \"kubernetes.io/projected/be08d23e-d6c9-4b42-904b-c36b05dfc316-kube-api-access-7mzzm\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.745499 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.749047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.765201 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.795160 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.804597 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.823739 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.844048 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.848567 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.895014 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxvl\" (UniqueName: \"kubernetes.io/projected/010ced82-1614-4ade-958b-d12ea6cda1b9-kube-api-access-vxxvl\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.914044 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.924189 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.934051 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ffjk\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-kube-api-access-6ffjk\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.953297 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.954310 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5nw2\" (UniqueName: \"kubernetes.io/projected/218b57d8-c3a3-4a33-a3ef-6701cf557911-kube-api-access-h5nw2\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.962669 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.965304 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.974654 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.982237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.985020 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.005903 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.040804 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8b060f-1416-4676-af77-45c0b411ff59-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.046703 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.064928 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.103039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26d6j\" (UniqueName: \"kubernetes.io/projected/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-kube-api-access-26d6j\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.124843 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.125923 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kjx6\" (UniqueName: \"kubernetes.io/projected/ebf5f82e-2a14-49d9-b670-59ed73e71203-kube-api-access-4kjx6\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.143113 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.146034 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rpw6\" (UniqueName: \"kubernetes.io/projected/25061ce4-ca31-4da7-ad36-c6535e1d2028-kube-api-access-8rpw6\") pod \"downloads-7954f5f757-hpz9q\" (UID: \"25061ce4-ca31-4da7-ad36-c6535e1d2028\") " pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.162411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pcb4d"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.168766 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q29sg\" (UniqueName: \"kubernetes.io/projected/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-kube-api-access-q29sg\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.179801 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.185624 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.186052 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe08d23e_d6c9_4b42_904b_c36b05dfc316.slice/crio-6b7ea25547e4a1f736567d0db68e73078c5436079bd724f4596ad496d44816d1 WatchSource:0}: Error finding container 6b7ea25547e4a1f736567d0db68e73078c5436079bd724f4596ad496d44816d1: Status 404 returned error can't find the container with id 6b7ea25547e4a1f736567d0db68e73078c5436079bd724f4596ad496d44816d1 Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.199984 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.203893 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.207663 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" event={"ID":"be08d23e-d6c9-4b42-904b-c36b05dfc316","Type":"ContainerStarted","Data":"6b7ea25547e4a1f736567d0db68e73078c5436079bd724f4596ad496d44816d1"} Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.215144 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd061f6d6_1983_405d_93af_3e492ff49f7c.slice/crio-92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78 WatchSource:0}: Error finding container 92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78: Status 404 returned error can't find the container with id 92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78 Jan 28 18:15:39 crc kubenswrapper[4985]: 
I0128 18:15:39.216149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.225540 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.234417 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.242761 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.264245 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77rrz\" (UniqueName: \"kubernetes.io/projected/715ad1e8-6659-4a18-a007-ad31ffa7044e-kube-api-access-77rrz\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.279738 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.314287 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.317062 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.317130 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.320060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njzzn\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-kube-api-access-njzzn\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.323705 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.347592 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.356565 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.363758 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.366973 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.383861 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.405387 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.416460 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.424787 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.440175 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.449755 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.460046 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hjjf7"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.465541 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.483992 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.488917 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.494409 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod218b57d8_c3a3_4a33_a3ef_6701cf557911.slice/crio-9b3f84cabb73fc20ad9534b981fb6e0a0313d0785c99dd0d15c0f9cdc6e4debe WatchSource:0}: Error finding container 9b3f84cabb73fc20ad9534b981fb6e0a0313d0785c99dd0d15c0f9cdc6e4debe: Status 404 returned error can't find the container with id 9b3f84cabb73fc20ad9534b981fb6e0a0313d0785c99dd0d15c0f9cdc6e4debe Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.499121 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.509846 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.524151 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.525306 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.534088 4985 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc731b198_314f_46a9_ad13_a4cc6c7bab94.slice/crio-7799c0504e8d1fffa9f0bc7d67e2c326156afaed4cf1d61765ba9e47c7794587 WatchSource:0}: Error finding container 7799c0504e8d1fffa9f0bc7d67e2c326156afaed4cf1d61765ba9e47c7794587: Status 404 returned error can't find the container with id 7799c0504e8d1fffa9f0bc7d67e2c326156afaed4cf1d61765ba9e47c7794587 Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.544886 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.545270 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.563166 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2wxf2"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.564058 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.584409 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.604609 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.610095 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.622657 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf5f82e_2a14_49d9_b670_59ed73e71203.slice/crio-91cfdcde5ecb33c60f3342cf5501d1b216c7e5139e2f48c5721944a5c98e3ec2 WatchSource:0}: Error finding container 91cfdcde5ecb33c60f3342cf5501d1b216c7e5139e2f48c5721944a5c98e3ec2: Status 404 returned error can't find the container with id 91cfdcde5ecb33c60f3342cf5501d1b216c7e5139e2f48c5721944a5c98e3ec2 Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.622934 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81ef78af_dc11_4231_9693_eb088718d103.slice/crio-6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2 WatchSource:0}: Error finding container 6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2: Status 404 returned error can't find the container with id 6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2 Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.623930 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.630746 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.643624 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.662535 4985 request.go:700] Waited for 1.925026435s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.664287 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.686234 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.706936 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.719564 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.723726 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.746575 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.752662 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hpz9q"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.764536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.785533 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.804587 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.830615 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25061ce4_ca31_4da7_ad36_c6535e1d2028.slice/crio-d3f3fdbd322417bb30c50dd78af3aba0532e0b870081cb8ae4572d5015d144e6 WatchSource:0}: Error finding container d3f3fdbd322417bb30c50dd78af3aba0532e0b870081cb8ae4572d5015d144e6: Status 404 returned error can't find the container with id d3f3fdbd322417bb30c50dd78af3aba0532e0b870081cb8ae4572d5015d144e6 Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.831418 4985 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.831595 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.843235 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.862192 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.881103 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn6fx\" (UniqueName: 
\"kubernetes.io/projected/db632812-bc0d-41f2-9c01-a19d40eb69be-kube-api-access-dn6fx\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.918008 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzrqc\" (UniqueName: \"kubernetes.io/projected/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-kube-api-access-fzrqc\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.923331 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.925359 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmnqc\" (UniqueName: \"kubernetes.io/projected/bf0cd343-6643-4463-bb9b-6e291a601901-kube-api-access-mmnqc\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.938535 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.939456 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.944876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wl24\" (UniqueName: \"kubernetes.io/projected/a1f443aa-50c0-4865-b6a3-a07d13b71e73-kube-api-access-9wl24\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.966142 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4skx\" (UniqueName: \"kubernetes.io/projected/9675b92d-1a0c-460b-bbad-cd6abab61f2f-kube-api-access-v4skx\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.979895 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjg8s\" (UniqueName: \"kubernetes.io/projected/d3e3ff22-4547-453f-bd6a-bf8d4098f3a3-kube-api-access-jjg8s\") pod \"migrator-59844c95c7-k5vgf\" (UID: \"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.999218 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:40 crc 
kubenswrapper[4985]: I0128 18:15:40.032716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.032780 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.032870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033008 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033058 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033088 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033111 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d9a024-6342-42ba-8a0b-4db3aa777a82-config\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033145 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07d9a024-6342-42ba-8a0b-4db3aa777a82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033173 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07d9a024-6342-42ba-8a0b-4db3aa777a82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033237 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.033679 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.533662017 +0000 UTC m=+151.360224838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.133937 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134045 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnwc\" (UniqueName: \"kubernetes.io/projected/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-kube-api-access-hnnwc\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134099 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhf2x\" (UniqueName: \"kubernetes.io/projected/cb7bad3c-725d-4a80-b398-140c6acf3825-kube-api-access-rhf2x\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134144 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-srv-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134189 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365a9e45-74e9-4231-8ccf-c5fbf200ab83-config-volume\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134211 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45774b89-be22-4692-a944-e5f12f898ea6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134244 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-certs\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134339 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134439 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134498 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqc7\" (UniqueName: \"kubernetes.io/projected/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-kube-api-access-vqqc7\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134521 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-plugins-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: 
\"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134598 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d9a024-6342-42ba-8a0b-4db3aa777a82-config\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb8kf\" (UniqueName: \"kubernetes.io/projected/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-kube-api-access-zb8kf\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134698 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-csi-data-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134743 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-metrics-certs\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134780 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134804 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134861 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07d9a024-6342-42ba-8a0b-4db3aa777a82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-images\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136451 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-key\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136625 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136692 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbvm\" (UniqueName: \"kubernetes.io/projected/70124ff4-00b0-41ef-947d-55eda7af02db-kube-api-access-qnbvm\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136874 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-stats-auth\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136957 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-default-certificate\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137063 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmv7\" (UniqueName: \"kubernetes.io/projected/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-kube-api-access-gjmv7\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137229 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d9a024-6342-42ba-8a0b-4db3aa777a82-config\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: 
I0128 18:15:40.137440 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/0953ef82-fce5-4008-85c8-b1377a8f66a2-kube-api-access-d69sn\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137487 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-socket-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137547 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7bad3c-725d-4a80-b398-140c6acf3825-service-ca-bundle\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137587 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c194d09-8a64-45a1-b40b-d1ea249b2626-proxy-tls\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137690 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l87gp\" (UniqueName: \"kubernetes.io/projected/45774b89-be22-4692-a944-e5f12f898ea6-kube-api-access-l87gp\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.137784 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.637756505 +0000 UTC m=+151.464319356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.138045 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-mountpoint-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.138285 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70124ff4-00b0-41ef-947d-55eda7af02db-tmpfs\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.138354 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.139784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140055 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwr85\" (UniqueName: \"kubernetes.io/projected/97299e5b-e1d8-41b0-b1b2-c5658f42a436-kube-api-access-rwr85\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw4sq\" (UniqueName: \"kubernetes.io/projected/cae1c988-06ab-4748-a62d-5bd7301b2c8d-kube-api-access-qw4sq\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140450 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97299e5b-e1d8-41b0-b1b2-c5658f42a436-cert\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140597 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140755 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0953ef82-fce5-4008-85c8-b1377a8f66a2-serving-cert\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140992 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-registration-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.141277 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.141457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-profile-collector-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.141568 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmx8w\" (UniqueName: \"kubernetes.io/projected/fa42b50c-59ed-4523-a6a0-994a72ff7071-kube-api-access-nmx8w\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.141836 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.145708 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.145894 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.146305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.146653 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/365a9e45-74e9-4231-8ccf-c5fbf200ab83-metrics-tls\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.147490 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-srv-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.147762 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7ecd4c5-97bd-4190-b474-a745b00d58aa-proxy-tls\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.147862 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.647824141 +0000 UTC m=+151.474387012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148198 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvzz\" (UniqueName: \"kubernetes.io/projected/365a9e45-74e9-4231-8ccf-c5fbf200ab83-kube-api-access-9xvzz\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148411 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-apiservice-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148508 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07d9a024-6342-42ba-8a0b-4db3aa777a82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148569 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-webhook-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148665 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-node-bootstrap-token\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148813 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0953ef82-fce5-4008-85c8-b1377a8f66a2-config\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148845 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148875 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56kt\" (UniqueName: \"kubernetes.io/projected/3c194d09-8a64-45a1-b40b-d1ea249b2626-kube-api-access-k56kt\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148986 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-cabundle\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149007 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c194d09-8a64-45a1-b40b-d1ea249b2626-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149037 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh4z6\" (UniqueName: \"kubernetes.io/projected/99828525-9397-448d-9a51-bc0da88038ac-kube-api-access-dh4z6\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149288 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfnc4\" (UniqueName: \"kubernetes.io/projected/a7ecd4c5-97bd-4190-b474-a745b00d58aa-kube-api-access-qfnc4\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.153817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.187647 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.187813 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.193383 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.200535 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.208712 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.213091 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.213699 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" event={"ID":"010ced82-1614-4ade-958b-d12ea6cda1b9","Type":"ContainerStarted","Data":"90508a917965ce10b3d4539dd69bf2e241090c233c30aed866c0f42e7f9c8edc"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.213767 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" event={"ID":"010ced82-1614-4ade-958b-d12ea6cda1b9","Type":"ContainerStarted","Data":"f56464798a61acc321f66cc28ebe165c756661bcb8e2a9030542e805fc8e8973"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.215156 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" event={"ID":"5691988c-c881-437e-aa60-317e424b3170","Type":"ContainerStarted","Data":"e056faac79cfd44ea89bb530737dab60b57099a92098fe4179cd9da6f2585435"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.216555 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" event={"ID":"81ef78af-dc11-4231-9693-eb088718d103","Type":"ContainerStarted","Data":"6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.217642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" event={"ID":"0a8b060f-1416-4676-af77-45c0b411ff59","Type":"ContainerStarted","Data":"d077cd20b8c092fe39dd142d804b7246ab2b6571d885765fed2cce619176de8c"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.219343 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" event={"ID":"be08d23e-d6c9-4b42-904b-c36b05dfc316","Type":"ContainerStarted","Data":"9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.220725 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" event={"ID":"d061f6d6-1983-405d-93af-3e492ff49f7c","Type":"ContainerStarted","Data":"92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.224231 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" event={"ID":"c731b198-314f-46a9-ad13-a4cc6c7bab94","Type":"ContainerStarted","Data":"7799c0504e8d1fffa9f0bc7d67e2c326156afaed4cf1d61765ba9e47c7794587"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.227635 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpz9q" 
event={"ID":"25061ce4-ca31-4da7-ad36-c6535e1d2028","Type":"ContainerStarted","Data":"d3f3fdbd322417bb30c50dd78af3aba0532e0b870081cb8ae4572d5015d144e6"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.228486 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" event={"ID":"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71","Type":"ContainerStarted","Data":"22ebcfd1c51c5e05131ab99ff373fbefb60df0542ade3322f4db099d62fbcab9"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.229998 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" event={"ID":"a3b95c03-1b0d-4c06-bb85-2f9ed127737b","Type":"ContainerStarted","Data":"b7e3372169d8ed5c188bb717f6a1c8906c055796b66786e1124e3c02bd76e20f"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.230202 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.231548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" event={"ID":"218b57d8-c3a3-4a33-a3ef-6701cf557911","Type":"ContainerStarted","Data":"9b3f84cabb73fc20ad9534b981fb6e0a0313d0785c99dd0d15c0f9cdc6e4debe"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.232498 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" event={"ID":"ebf5f82e-2a14-49d9-b670-59ed73e71203","Type":"ContainerStarted","Data":"91cfdcde5ecb33c60f3342cf5501d1b216c7e5139e2f48c5721944a5c98e3ec2"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.233794 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" event={"ID":"44d556c9-6c8e-45d3-bec8-303081e8c4e1","Type":"ContainerStarted","Data":"0e823a46854aa252fe9015e01e9cddb6f75ae7ba4ce62f7d7338ee347ff378f1"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.235032 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b5t5k" event={"ID":"c7f9c411-3899-4824-a051-b18ad42a950e","Type":"ContainerStarted","Data":"0c4fa24c07af4cdb6a65715225f501e2d489d532f902d5a36a0225bc9b457962"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250570 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0953ef82-fce5-4008-85c8-b1377a8f66a2-serving-cert\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: 
\"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250765 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250790 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-registration-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.250868 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.750832638 +0000 UTC m=+151.577395499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-profile-collector-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250996 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmx8w\" (UniqueName: \"kubernetes.io/projected/fa42b50c-59ed-4523-a6a0-994a72ff7071-kube-api-access-nmx8w\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251049 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-registration-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251048 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc 
kubenswrapper[4985]: I0128 18:15:40.251106 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/365a9e45-74e9-4231-8ccf-c5fbf200ab83-metrics-tls\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251151 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-srv-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251168 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7ecd4c5-97bd-4190-b474-a745b00d58aa-proxy-tls\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251188 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvzz\" (UniqueName: \"kubernetes.io/projected/365a9e45-74e9-4231-8ccf-c5fbf200ab83-kube-api-access-9xvzz\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251204 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-apiservice-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251226 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-webhook-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251244 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-node-bootstrap-token\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251294 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0953ef82-fce5-4008-85c8-b1377a8f66a2-config\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56kt\" (UniqueName: \"kubernetes.io/projected/3c194d09-8a64-45a1-b40b-d1ea249b2626-kube-api-access-k56kt\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251353 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-cabundle\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251374 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c194d09-8a64-45a1-b40b-d1ea249b2626-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251404 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251426 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh4z6\" (UniqueName: \"kubernetes.io/projected/99828525-9397-448d-9a51-bc0da88038ac-kube-api-access-dh4z6\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251450 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfnc4\" (UniqueName: \"kubernetes.io/projected/a7ecd4c5-97bd-4190-b474-a745b00d58aa-kube-api-access-qfnc4\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251470 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnnwc\" (UniqueName: \"kubernetes.io/projected/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-kube-api-access-hnnwc\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251513 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhf2x\" (UniqueName: \"kubernetes.io/projected/cb7bad3c-725d-4a80-b398-140c6acf3825-kube-api-access-rhf2x\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251532 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-srv-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251553 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365a9e45-74e9-4231-8ccf-c5fbf200ab83-config-volume\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251573 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45774b89-be22-4692-a944-e5f12f898ea6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-certs\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251610 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251636 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251665 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqc7\" (UniqueName: \"kubernetes.io/projected/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-kube-api-access-vqqc7\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251683 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-plugins-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251705 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb8kf\" (UniqueName: \"kubernetes.io/projected/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-kube-api-access-zb8kf\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251724 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-csi-data-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-metrics-certs\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251770 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251815 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251834 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-images\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251855 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-key\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251935 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-plugins-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.253376 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c194d09-8a64-45a1-b40b-d1ea249b2626-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.253977 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-images\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbvm\" (UniqueName: \"kubernetes.io/projected/70124ff4-00b0-41ef-947d-55eda7af02db-kube-api-access-qnbvm\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-stats-auth\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254370 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-default-certificate\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254387 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjmv7\" (UniqueName: \"kubernetes.io/projected/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-kube-api-access-gjmv7\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/0953ef82-fce5-4008-85c8-b1377a8f66a2-kube-api-access-d69sn\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.254474 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.75442812 +0000 UTC m=+151.580990971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254567 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-socket-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7bad3c-725d-4a80-b398-140c6acf3825-service-ca-bundle\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254691 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c194d09-8a64-45a1-b40b-d1ea249b2626-proxy-tls\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-mountpoint-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254783 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l87gp\" (UniqueName: \"kubernetes.io/projected/45774b89-be22-4692-a944-e5f12f898ea6-kube-api-access-l87gp\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254852 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70124ff4-00b0-41ef-947d-55eda7af02db-tmpfs\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254889 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.258649 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-profile-collector-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.258959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365a9e45-74e9-4231-8ccf-c5fbf200ab83-config-volume\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.259208 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.259435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254933 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwr85\" (UniqueName: \"kubernetes.io/projected/97299e5b-e1d8-41b0-b1b2-c5658f42a436-kube-api-access-rwr85\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.260139 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97299e5b-e1d8-41b0-b1b2-c5658f42a436-cert\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.260632 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw4sq\" (UniqueName: \"kubernetes.io/projected/cae1c988-06ab-4748-a62d-5bd7301b2c8d-kube-api-access-qw4sq\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.261048 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-mountpoint-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.261321 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-socket-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.261398 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-cabundle\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.261599 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-csi-data-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.262551 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70124ff4-00b0-41ef-947d-55eda7af02db-tmpfs\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.262564 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0953ef82-fce5-4008-85c8-b1377a8f66a2-config\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.263349 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7bad3c-725d-4a80-b398-140c6acf3825-service-ca-bundle\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.264206 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7ecd4c5-97bd-4190-b474-a745b00d58aa-proxy-tls\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.264365 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-key\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.264396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/365a9e45-74e9-4231-8ccf-c5fbf200ab83-metrics-tls\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.265020 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.265191 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-srv-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.266086 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.266771 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.267012 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-stats-auth\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.267586 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-metrics-certs\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.268749 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45774b89-be22-4692-a944-e5f12f898ea6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.268972 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-default-certificate\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.269762 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.271370 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.285677 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97299e5b-e1d8-41b0-b1b2-c5658f42a436-cert\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286209 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-webhook-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286328 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-node-bootstrap-token\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286371 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0953ef82-fce5-4008-85c8-b1377a8f66a2-serving-cert\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286537 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07d9a024-6342-42ba-8a0b-4db3aa777a82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286721 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.287147 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-certs\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.287592 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.287966 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.288666 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.288894 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-apiservice-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.289866 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07d9a024-6342-42ba-8a0b-4db3aa777a82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.290892 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-srv-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.292926 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c194d09-8a64-45a1-b40b-d1ea249b2626-proxy-tls\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.310194 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmx8w\" (UniqueName: \"kubernetes.io/projected/fa42b50c-59ed-4523-a6a0-994a72ff7071-kube-api-access-nmx8w\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.324064 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhf2x\" (UniqueName: \"kubernetes.io/projected/cb7bad3c-725d-4a80-b398-140c6acf3825-kube-api-access-rhf2x\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.326825 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.345568 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvzz\" (UniqueName: \"kubernetes.io/projected/365a9e45-74e9-4231-8ccf-c5fbf200ab83-kube-api-access-9xvzz\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.361963 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.362479 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.86245351 +0000 UTC m=+151.689016331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.364173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.364554 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.86454495 +0000 UTC m=+151.691107771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.369799 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.386163 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb8kf\" (UniqueName: \"kubernetes.io/projected/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-kube-api-access-zb8kf\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.400756 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnwc\" (UniqueName: \"kubernetes.io/projected/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-kube-api-access-hnnwc\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.421075 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh4z6\" (UniqueName: \"kubernetes.io/projected/99828525-9397-448d-9a51-bc0da88038ac-kube-api-access-dh4z6\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.445000 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/0953ef82-fce5-4008-85c8-b1377a8f66a2-kube-api-access-d69sn\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.461129 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfnc4\" (UniqueName: \"kubernetes.io/projected/a7ecd4c5-97bd-4190-b474-a745b00d58aa-kube-api-access-qfnc4\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.469519 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.469956 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.969942405 +0000 UTC m=+151.796505226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.484376 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjmv7\" (UniqueName: \"kubernetes.io/projected/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-kube-api-access-gjmv7\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.516720 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.541030 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqqc7\" (UniqueName: \"kubernetes.io/projected/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-kube-api-access-vqqc7\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.546745 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.559609 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.566121 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.567466 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbvm\" (UniqueName: \"kubernetes.io/projected/70124ff4-00b0-41ef-947d-55eda7af02db-kube-api-access-qnbvm\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.570854 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.571307 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.071292285 +0000 UTC m=+151.897855106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.575953 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.580999 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l87gp\" (UniqueName: \"kubernetes.io/projected/45774b89-be22-4692-a944-e5f12f898ea6-kube-api-access-l87gp\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.581385 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.587990 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56kt\" (UniqueName: \"kubernetes.io/projected/3c194d09-8a64-45a1-b40b-d1ea249b2626-kube-api-access-k56kt\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.589676 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.599041 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.601095 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf"]
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.601173 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw4sq\" (UniqueName: \"kubernetes.io/projected/cae1c988-06ab-4748-a62d-5bd7301b2c8d-kube-api-access-qw4sq\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.604950 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fn9d5"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.614314 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.618039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwr85\" (UniqueName: \"kubernetes.io/projected/97299e5b-e1d8-41b0-b1b2-c5658f42a436-kube-api-access-rwr85\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.621204 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.634230 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.641487 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.649363 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g5knd"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.657349 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2lzzr"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.673010 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.673672 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.173650234 +0000 UTC m=+152.000213055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: W0128 18:15:40.674454 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3e3ff22_4547_453f_bd6a_bf8d4098f3a3.slice/crio-3d08275f7d255075cf5051dcfbc6e0d3d24d15b16d7a2d77c2254bdf95636304 WatchSource:0}: Error finding container 3d08275f7d255075cf5051dcfbc6e0d3d24d15b16d7a2d77c2254bdf95636304: Status 404 returned error can't find the container with id 3d08275f7d255075cf5051dcfbc6e0d3d24d15b16d7a2d77c2254bdf95636304
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.674565 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.775390 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.775796 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.275775426 +0000 UTC m=+152.102338247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.858895 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.879691 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.879901 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.379877635 +0000 UTC m=+152.206440456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.881866 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.882286 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.382269853 +0000 UTC m=+152.208832674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.983897 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.984237 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.48422293 +0000 UTC m=+152.310785751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.987816 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm"]
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.087767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.088355 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.588331708 +0000 UTC m=+152.414894589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.155212 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab37c3ff_de29_4cba_8c5b_83d4fdca736c.slice/crio-b9ea7903d1ee21f12aa1d5dc224da3033337c485d9b2d1882bb9a7756312ae0d WatchSource:0}: Error finding container b9ea7903d1ee21f12aa1d5dc224da3033337c485d9b2d1882bb9a7756312ae0d: Status 404 returned error can't find the container with id b9ea7903d1ee21f12aa1d5dc224da3033337c485d9b2d1882bb9a7756312ae0d
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.190078 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.190129 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.190240 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.190652 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.690634136 +0000 UTC m=+152.517196957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.196150 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.251525 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" event={"ID":"9675b92d-1a0c-460b-bbad-cd6abab61f2f","Type":"ContainerStarted","Data":"c359257c5b550240d6932b83414ea782aee988a80cd656c2b3c664f14ea5664d"}
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.252546 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.291813 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.292938 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.792925963 +0000 UTC m=+152.619488774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.295494 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" event={"ID":"d061f6d6-1983-405d-93af-3e492ff49f7c","Type":"ContainerStarted","Data":"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5"}
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.295521 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qnrsp" event={"ID":"cb7bad3c-725d-4a80-b398-140c6acf3825","Type":"ContainerStarted","Data":"0d1b21b030c24fdc6bba830677624d21cbb5cf6e3e7d4ae74ad81460cf48c5d3"}
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.295534 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bmvks"]
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.295550 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hk2lj"]
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.319453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" event={"ID":"a3b95c03-1b0d-4c06-bb85-2f9ed127737b","Type":"ContainerStarted","Data":"2bcc0ea57ad00fb5d19d309b535cd61c28cf5580d0d5cb443d19f13fe3299db4"}
Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.320205 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1f443aa_50c0_4865_b6a3_a07d13b71e73.slice/crio-b14f2a4c9cd7fb735af9bff29aed181fc4521a7bc2ac7c7d8e7924e42122fb4b WatchSource:0}: Error finding container b14f2a4c9cd7fb735af9bff29aed181fc4521a7bc2ac7c7d8e7924e42122fb4b: Status 404 returned error can't find the container with id b14f2a4c9cd7fb735af9bff29aed181fc4521a7bc2ac7c7d8e7924e42122fb4b
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.326627 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" event={"ID":"218b57d8-c3a3-4a33-a3ef-6701cf557911","Type":"ContainerStarted","Data":"61f4f9cfcfb91c7e2b3605826caa8b868277c5073550dd802503532a73b730ed"}
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.347415 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2lzzr" event={"ID":"ab37c3ff-de29-4cba-8c5b-83d4fdca736c","Type":"ContainerStarted","Data":"b9ea7903d1ee21f12aa1d5dc224da3033337c485d9b2d1882bb9a7756312ae0d"}
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.364699 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"]
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.364740 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf"
event={"ID":"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3","Type":"ContainerStarted","Data":"3d08275f7d255075cf5051dcfbc6e0d3d24d15b16d7a2d77c2254bdf95636304"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.372794 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j6799"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.388464 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.393499 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.394052 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.894033006 +0000 UTC m=+152.720595827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.397091 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.397143 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" event={"ID":"44d556c9-6c8e-45d3-bec8-303081e8c4e1","Type":"ContainerStarted","Data":"d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.404690 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.404724 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.407086 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerStarted","Data":"53006daf2106b60c7535f2e694eae0c2301a9a6300755e25161feabe1eba81f5"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.412681 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk"] Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.435879 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf0cd343_6643_4463_bb9b_6e291a601901.slice/crio-caa62707f85deaf8041e0f0a5513e4852c113f25dbe7abcf71c9c5125e88148d WatchSource:0}: Error finding container caa62707f85deaf8041e0f0a5513e4852c113f25dbe7abcf71c9c5125e88148d: Status 404 returned error can't find the container with id caa62707f85deaf8041e0f0a5513e4852c113f25dbe7abcf71c9c5125e88148d Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.441504 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa42b50c_59ed_4523_a6a0_994a72ff7071.slice/crio-f476b82bfa300d243f1a834e322509e00a075abd07e9dd5cafcffe28352ce983 WatchSource:0}: Error finding container f476b82bfa300d243f1a834e322509e00a075abd07e9dd5cafcffe28352ce983: Status 404 returned error can't find the container with id f476b82bfa300d243f1a834e322509e00a075abd07e9dd5cafcffe28352ce983 Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.444421 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb632812_bc0d_41f2_9c01_a19d40eb69be.slice/crio-06ea3a26c30303ff3eca9897196cc2f61f7d491aa305689f190b290e65b077b1 WatchSource:0}: Error finding container 06ea3a26c30303ff3eca9897196cc2f61f7d491aa305689f190b290e65b077b1: Status 404 returned error can't find the container with id 06ea3a26c30303ff3eca9897196cc2f61f7d491aa305689f190b290e65b077b1 Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.457593 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0e8632e_effa_4fe6_ac4d_8c33abe6eecc.slice/crio-cc1dc4b5f899165076bd1518a496186f91a05dc16df07043c514bf2001990eea WatchSource:0}: Error finding container cc1dc4b5f899165076bd1518a496186f91a05dc16df07043c514bf2001990eea: Status 404 returned error can't find the container with id cc1dc4b5f899165076bd1518a496186f91a05dc16df07043c514bf2001990eea Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.462738 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07d9a024_6342_42ba_8a0b_4db3aa777a82.slice/crio-8cc07f66cf536e3734e4900c2af95da4824702d6e0f524b29e0e3e1b219425ce WatchSource:0}: Error finding container 8cc07f66cf536e3734e4900c2af95da4824702d6e0f524b29e0e3e1b219425ce: Status 404 returned error can't find the container with id 8cc07f66cf536e3734e4900c2af95da4824702d6e0f524b29e0e3e1b219425ce Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.495082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.496310 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.996282812 +0000 UTC m=+152.822845783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.548858 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fzzsl"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.555368 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.561051 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.573359 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.586770 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.592579 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"] Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.596464 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.096444558 +0000 UTC m=+152.923007369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.596834 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.597291 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.597682 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:15:42.097665773 +0000 UTC m=+152.924228594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.599286 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.602448 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.699326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.699600 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.199559329 +0000 UTC m=+153.026122190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.699734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.700561 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.200507056 +0000 UTC m=+153.027069897 (durationBeforeRetry 500ms). 
Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.700561 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.200507056 +0000 UTC m=+153.027069897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.715744 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5zj27"]
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.788579 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g5knd"]
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.800890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.801458 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.301438034 +0000 UTC m=+153.128000855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.809105 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e4812cb_3dc4_4d34_b24d_fd5f4a507030.slice/crio-1f3a4d858555971484a5fd1f6b2765de86aeaee9e636d4ebbc11f61ac9f47cf0 WatchSource:0}: Error finding container 1f3a4d858555971484a5fd1f6b2765de86aeaee9e636d4ebbc11f61ac9f47cf0: Status 404 returned error can't find the container with id 1f3a4d858555971484a5fd1f6b2765de86aeaee9e636d4ebbc11f61ac9f47cf0
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.819626 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"]
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.822601 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fn9d5"]
Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.838376 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97299e5b_e1d8_41b0_b1b2_c5658f42a436.slice/crio-0211f909758d157e47d98a4656be1ad4ffedcc85e0b1a95b92ae4be01693eb00 WatchSource:0}: Error finding container 0211f909758d157e47d98a4656be1ad4ffedcc85e0b1a95b92ae4be01693eb00: Status 404 returned error can't find the container with id 0211f909758d157e47d98a4656be1ad4ffedcc85e0b1a95b92ae4be01693eb00
Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.902317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.904654 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.404636857 +0000 UTC m=+153.231199668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.003613 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.004060 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.504039771 +0000 UTC m=+153.330602592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.105182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.105739 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.105823 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.110448 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.172640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.177891 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podStartSLOduration=122.17787069 podStartE2EDuration="2m2.17787069s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.160055165 +0000 UTC m=+152.986617986" watchObservedRunningTime="2026-01-28 18:15:42.17787069 +0000 UTC m=+153.004433511"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.206824 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.207092 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.207164 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.207370 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.707355038 +0000 UTC m=+153.533917849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.216759 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.216920 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.227637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.309321 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.809300445 +0000 UTC m=+153.635863276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.308855 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.411201 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.411392 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.911364116 +0000 UTC m=+153.737926937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.411516 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.412035 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.912028095 +0000 UTC m=+153.738590916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.425313 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g5knd" event={"ID":"97299e5b-e1d8-41b0-b1b2-c5658f42a436","Type":"ContainerStarted","Data":"0211f909758d157e47d98a4656be1ad4ffedcc85e0b1a95b92ae4be01693eb00"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.430130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" event={"ID":"cae1c988-06ab-4748-a62d-5bd7301b2c8d","Type":"ContainerStarted","Data":"0d9d752a79dcaf04cf8b3f62e0482bd30919b4e1ceebcc26a5724adbdcde76a1"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.466201 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpz9q" event={"ID":"25061ce4-ca31-4da7-ad36-c6535e1d2028","Type":"ContainerStarted","Data":"996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.473770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b5t5k" event={"ID":"c7f9c411-3899-4824-a051-b18ad42a950e","Type":"ContainerStarted","Data":"943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.476310 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" event={"ID":"c08b13aa-cae7-420a-ae3b-4846ea74c5c8","Type":"ContainerStarted","Data":"1f99fac7cfe9e10b3503c2a47c0d78631d7f3448f9cc0f1b7d7d9f5215af91e8"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.476347 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" event={"ID":"c08b13aa-cae7-420a-ae3b-4846ea74c5c8","Type":"ContainerStarted","Data":"779328749c2fe35763334e5d9a6d775dfa61fdd788471c68340a7e74e8c74c4d"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.481441 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" event={"ID":"5691988c-c881-437e-aa60-317e424b3170","Type":"ContainerStarted","Data":"e23e2068516c6cb6fab9f98ec03fc1a5d04d167dd1269b4b9055e1fb8f017cd4"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.483821 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" event={"ID":"0a8b060f-1416-4676-af77-45c0b411ff59","Type":"ContainerStarted","Data":"523379a35a8f4358688b7a5f6c4206a08b1dd03849c444c85977a9d32ca697f0"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.488869 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fn9d5" event={"ID":"365a9e45-74e9-4231-8ccf-c5fbf200ab83","Type":"ContainerStarted","Data":"1b05901e0da1ee81f48449495269b7562be0c2e9e483b87c4525f64d493bf952"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.489659 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.490294 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" event={"ID":"07d9a024-6342-42ba-8a0b-4db3aa777a82","Type":"ContainerStarted","Data":"8cc07f66cf536e3734e4900c2af95da4824702d6e0f524b29e0e3e1b219425ce"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.492865 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j6799" event={"ID":"db632812-bc0d-41f2-9c01-a19d40eb69be","Type":"ContainerStarted","Data":"06ea3a26c30303ff3eca9897196cc2f61f7d491aa305689f190b290e65b077b1"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.494114 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" event={"ID":"1030ed14-9fc1-4ec9-a93c-13eab69320ae","Type":"ContainerStarted","Data":"8f93ab89ce3c6adab00c97ddb3618e2ccd297812e80918e595461de298f590fd"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.497114 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" event={"ID":"0953ef82-fce5-4008-85c8-b1377a8f66a2","Type":"ContainerStarted","Data":"6b6caec17afe76097b2fb413b8a01b0e5c28dd94270f42c5f88caef2787cd35b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.499111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" event={"ID":"9675b92d-1a0c-460b-bbad-cd6abab61f2f","Type":"ContainerStarted","Data":"88b597bfd1be0f2e24ec28bda9f4ca5f3afea78ad15dcd45bf22a2c4177227af"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.500172 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" event={"ID":"893bf4c0-7b07-4e49-bff4-9ed7d52b3196","Type":"ContainerStarted","Data":"24ea991929f5691447c508e8f97362e7755d0ee1ce0c8580e35c8f94a2adf371"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.501302 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" event={"ID":"bf0cd343-6643-4463-bb9b-6e291a601901","Type":"ContainerStarted","Data":"caa62707f85deaf8041e0f0a5513e4852c113f25dbe7abcf71c9c5125e88148d"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.506062 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.508782 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" event={"ID":"0e4812cb-3dc4-4d34-b24d-fd5f4a507030","Type":"ContainerStarted","Data":"1f3a4d858555971484a5fd1f6b2765de86aeaee9e636d4ebbc11f61ac9f47cf0"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.511396 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" event={"ID":"fa6948a7-6763-4c03-b6f9-ecfb38a8a064","Type":"ContainerStarted","Data":"d3c44d232afd74c9f45fb63de97eaa472860c9005aa243d3ffbc79ecd22cf1a4"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.512375 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.513941 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.01391641 +0000 UTC m=+153.840479231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.516938 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" event={"ID":"70124ff4-00b0-41ef-947d-55eda7af02db","Type":"ContainerStarted","Data":"1792476aa41bf09e5e86911b6b959eba4b9cb5a4e90cc3cf9dfa1d77a0efc8b8"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.519335 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" event={"ID":"a1f443aa-50c0-4865-b6a3-a07d13b71e73","Type":"ContainerStarted","Data":"278abbe234a99ae7d3fd7712408ef7fdb0486f4826017a922229bd744bed9a2c"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.519374 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" event={"ID":"a1f443aa-50c0-4865-b6a3-a07d13b71e73","Type":"ContainerStarted","Data":"b14f2a4c9cd7fb735af9bff29aed181fc4521a7bc2ac7c7d8e7924e42122fb4b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.519905 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-b5t5k" podStartSLOduration=122.51989344 podStartE2EDuration="2m2.51989344s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.517454531 +0000 UTC m=+153.344017362" watchObservedRunningTime="2026-01-28 18:15:42.51989344 +0000 UTC m=+153.346456261"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.521544 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" event={"ID":"7b3b0534-3356-446a-91e8-dae980c402db","Type":"ContainerStarted","Data":"1e7f0e57b01f1d7574c6a758c09ab0d8248fafcd79d2a77c1cd5931c1c715640"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.524752 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" event={"ID":"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71","Type":"ContainerStarted","Data":"0a1cca030e7898a383fe11062638bfb92a0213efb9d089d5970baf9937a9fc55"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.526439 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" event={"ID":"45774b89-be22-4692-a944-e5f12f898ea6","Type":"ContainerStarted","Data":"e0d46af3685c149a5fcf5dec6a551c09120577182a6d1300402cb740e9ceb3af"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.530976 4985 generic.go:334] "Generic (PLEG): container finished" podID="c731b198-314f-46a9-ad13-a4cc6c7bab94" containerID="47e07904cc0955f8b324534c75aef4da5048843872e9f33590c74115e848c24b" exitCode=0
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.531044 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" event={"ID":"c731b198-314f-46a9-ad13-a4cc6c7bab94","Type":"ContainerDied","Data":"47e07904cc0955f8b324534c75aef4da5048843872e9f33590c74115e848c24b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.534062 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qnrsp" event={"ID":"cb7bad3c-725d-4a80-b398-140c6acf3825","Type":"ContainerStarted","Data":"8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.538200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2lzzr" event={"ID":"ab37c3ff-de29-4cba-8c5b-83d4fdca736c","Type":"ContainerStarted","Data":"0dc1f292c7d223f611f63a9e0459a2a01432e443a858dfa4c18bbd7496a7fff4"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.541063 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerStarted","Data":"feb43603996825516cfc092bf0fad2145b414d6c0a264d5677b6af0e016c8ef8"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.542121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" event={"ID":"3c194d09-8a64-45a1-b40b-d1ea249b2626","Type":"ContainerStarted","Data":"fcada611c3b3fe11486c3124fe3827a048fce22bf393cdde0e55e9fae605803b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.543418 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" event={"ID":"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c","Type":"ContainerStarted","Data":"81cd0e0ccfddd850ed46ee2c16fe85c0d0c6bcf7c2090b607ffd1f44455d8136"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.544838 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" event={"ID":"a7ecd4c5-97bd-4190-b474-a745b00d58aa","Type":"ContainerStarted","Data":"5314169b85f4c91b8842227e9762a819a4bd8e7cc2993af76830dd293d144cdb"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.548235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" event={"ID":"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc","Type":"ContainerStarted","Data":"cc1dc4b5f899165076bd1518a496186f91a05dc16df07043c514bf2001990eea"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.551142 4985 generic.go:334] "Generic (PLEG): container finished" podID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerID="b6ccf435f06be325066da899de8006e2145eae58ef5b8e46d92c0cab3d64ce9d" exitCode=0
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.551214 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" event={"ID":"ebf5f82e-2a14-49d9-b670-59ed73e71203","Type":"ContainerDied","Data":"b6ccf435f06be325066da899de8006e2145eae58ef5b8e46d92c0cab3d64ce9d"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.567623 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"6b10ed763b169d6f532181c8d5b22f9153351cfca621d39432cf510addeb355d"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.570695 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" event={"ID":"81ef78af-dc11-4231-9693-eb088718d103","Type":"ContainerStarted","Data":"c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.573087 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" event={"ID":"fa42b50c-59ed-4523-a6a0-994a72ff7071","Type":"ContainerStarted","Data":"f476b82bfa300d243f1a834e322509e00a075abd07e9dd5cafcffe28352ce983"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.573502 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.573540 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.575697 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.575742 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.579302 4985 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fdfqq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.579365 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.602782 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" podStartSLOduration=122.602763315 podStartE2EDuration="2m2.602763315s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.602286432 +0000 UTC m=+153.428849253" watchObservedRunningTime="2026-01-28 18:15:42.602763315 +0000 UTC m=+153.429326136"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.615339 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.615967 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.11594945 +0000 UTC m=+153.942512271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.642204 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podStartSLOduration=122.642183135 podStartE2EDuration="2m2.642183135s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.641557508 +0000 UTC m=+153.468120319" watchObservedRunningTime="2026-01-28 18:15:42.642183135 +0000 UTC m=+153.468745946"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.703054 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" podStartSLOduration=122.703024704 podStartE2EDuration="2m2.703024704s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.700075031 +0000 UTC m=+153.526637852" watchObservedRunningTime="2026-01-28 18:15:42.703024704 +0000 UTC m=+153.529587525"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.716533 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.716702 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.216685843 +0000 UTC m=+154.043248664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.717484 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.718322 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.218280608 +0000 UTC m=+154.044843439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.836269 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.841350 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.341315614 +0000 UTC m=+154.167878625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.945963 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.946923 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.446898865 +0000 UTC m=+154.273461686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.049128 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.050612 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.550568961 +0000 UTC m=+154.377131782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.055193 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.055669 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.555654015 +0000 UTC m=+154.382216836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.156462 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.157587 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.657565012 +0000 UTC m=+154.484127833 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.259181 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.259824 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.759811697 +0000 UTC m=+154.586374518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.360609 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.361150 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.861116726 +0000 UTC m=+154.687679547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: W0128 18:15:43.402455 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f2550561034c8e4a230b7ed2e9f23ce77cc81984f50d283d56fec33e8fc739fb WatchSource:0}: Error finding container f2550561034c8e4a230b7ed2e9f23ce77cc81984f50d283d56fec33e8fc739fb: Status 404 returned error can't find the container with id f2550561034c8e4a230b7ed2e9f23ce77cc81984f50d283d56fec33e8fc739fb Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.462115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.464046 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.964033401 +0000 UTC m=+154.790596222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.567346 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.567471 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.067413639 +0000 UTC m=+154.893976460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.567880 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.568461 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.068444768 +0000 UTC m=+154.895007599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.598659 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" event={"ID":"1030ed14-9fc1-4ec9-a93c-13eab69320ae","Type":"ContainerStarted","Data":"437ea022ca695dd3c8be1cbb1b44f690df361a980e7c2eb2985b0f8b38dc9e0c"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.603056 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f2550561034c8e4a230b7ed2e9f23ce77cc81984f50d283d56fec33e8fc739fb"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.617369 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" event={"ID":"45774b89-be22-4692-a944-e5f12f898ea6","Type":"ContainerStarted","Data":"9cc1040bc4b4050cbdb18298dfc9be5cbfe8a3a8c66606d5d752ec3f98391b2f"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.619055 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" event={"ID":"cae1c988-06ab-4748-a62d-5bd7301b2c8d","Type":"ContainerStarted","Data":"d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.619567 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.620914 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.620955 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.623033 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" event={"ID":"ebf5f82e-2a14-49d9-b670-59ed73e71203","Type":"ContainerStarted","Data":"ff73d967f8fb248341974b5e406a44622b69b6de9e5df338a4adc2449181764b"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.630602 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" event={"ID":"0e4812cb-3dc4-4d34-b24d-fd5f4a507030","Type":"ContainerStarted","Data":"b39419bdde15412964a2e3b95d2b8b203bd3bb7d0354865d148cf0f708038435"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.632640 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ce07fa2ab23a4f3ece8649aaa467e9290f0006aaf0a7b738024af734b6dbeefc"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.635203 4985 generic.go:334] "Generic (PLEG): container finished" podID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerID="feb43603996825516cfc092bf0fad2145b414d6c0a264d5677b6af0e016c8ef8" exitCode=0 Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.635290 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerDied","Data":"feb43603996825516cfc092bf0fad2145b414d6c0a264d5677b6af0e016c8ef8"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.639031 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" event={"ID":"218b57d8-c3a3-4a33-a3ef-6701cf557911","Type":"ContainerStarted","Data":"46ef78aa78108a5a2180e0a31160ecd7bbfc8ab0e641d68cb257650ad6901d56"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.644993 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" podStartSLOduration=43.644946982 podStartE2EDuration="43.644946982s" podCreationTimestamp="2026-01-28 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.626311983 +0000 UTC m=+154.452874824" watchObservedRunningTime="2026-01-28 18:15:43.644946982 +0000 UTC m=+154.471509803" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.646336 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" event={"ID":"a7ecd4c5-97bd-4190-b474-a745b00d58aa","Type":"ContainerStarted","Data":"d82ec03a2421dbac9721060c554073a3ffc5995669ac840d112278ea87825a43"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.646387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" event={"ID":"a7ecd4c5-97bd-4190-b474-a745b00d58aa","Type":"ContainerStarted","Data":"2e3181f1a5918f1e10191a34e028952377350610670581a1468cf7388fd18edb"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.649446 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" event={"ID":"5691988c-c881-437e-aa60-317e424b3170","Type":"ContainerStarted","Data":"f75673dbae32a425735282fafc61e6dc472bef448e5d322e633bf53e1f982b2d"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.651779 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" event={"ID":"893bf4c0-7b07-4e49-bff4-9ed7d52b3196","Type":"ContainerStarted","Data":"4bccb1fb1259c25912a8a652d5313efd046c3b3be158159b0b3bf4e137dc501b"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.651812 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" event={"ID":"893bf4c0-7b07-4e49-bff4-9ed7d52b3196","Type":"ContainerStarted","Data":"e0a9377ebc7932896bd107c05096556a55cb6e4df29babf79f8a822b2c002a23"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.652201 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.653975 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" event={"ID":"bf0cd343-6643-4463-bb9b-6e291a601901","Type":"ContainerStarted","Data":"045d50ee895655138412a42045d807578fd287fb32fee5d3d7edf4034654b0ff"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.658716 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" podStartSLOduration=123.658695383 podStartE2EDuration="2m3.658695383s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.657555771 +0000 UTC m=+154.484118592" watchObservedRunningTime="2026-01-28 18:15:43.658695383 +0000 UTC m=+154.485258204" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.659821 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podStartSLOduration=123.659813555 podStartE2EDuration="2m3.659813555s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.64207148 +0000 UTC m=+154.468634321" watchObservedRunningTime="2026-01-28 18:15:43.659813555 +0000 UTC m=+154.486376376" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.660476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" event={"ID":"70124ff4-00b0-41ef-947d-55eda7af02db","Type":"ContainerStarted","Data":"6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.660722 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" 
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.665618 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g5knd" event={"ID":"97299e5b-e1d8-41b0-b1b2-c5658f42a436","Type":"ContainerStarted","Data":"0dd1582aa5163e30675732fbf375bc84e847b0cf1b41e9dc1a0a941d81828fcf"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.668338 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.669931 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.670187 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.671178 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" event={"ID":"3c194d09-8a64-45a1-b40b-d1ea249b2626","Type":"ContainerStarted","Data":"19b1edc012d998b55e4fde5b82a097fa2178564028adc22c43436a3488ef2d92"} Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.671333 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.171309961 +0000 UTC m=+154.997872792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.672613 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"45002a6a2c7138d9b42aef2ed0bd03e5dd1f62156eb66981aa82bf8098a68b3a"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.676165 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" event={"ID":"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c","Type":"ContainerStarted","Data":"b91f562174ffab8488433ee9f5d4dbeb69c2bd5a5a2200d215b875a40eae0c2e"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.685902 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" event={"ID":"0953ef82-fce5-4008-85c8-b1377a8f66a2","Type":"ContainerStarted","Data":"f5e186a2088ec4f860d3b8cb51c2f4190f8a2eabf5677599790b35a7acf2f350"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.697138 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" podStartSLOduration=123.697085344 podStartE2EDuration="2m3.697085344s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.693570484 +0000 UTC m=+154.520133315" watchObservedRunningTime="2026-01-28 18:15:43.697085344 +0000 UTC m=+154.523648165" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.704476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" event={"ID":"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3","Type":"ContainerStarted","Data":"2a9f81657487a25f347bd15085f723ec9c4d54b203cce27b61d1672aa094702f"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.736736 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" event={"ID":"fa42b50c-59ed-4523-a6a0-994a72ff7071","Type":"ContainerStarted","Data":"f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.737449 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.739609 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.739670 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" 
containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.749615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j6799" event={"ID":"db632812-bc0d-41f2-9c01-a19d40eb69be","Type":"ContainerStarted","Data":"08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.751394 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.756971 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.757242 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.760629 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" podStartSLOduration=123.760599609 podStartE2EDuration="2m3.760599609s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.755226336 +0000 UTC m=+154.581789157" watchObservedRunningTime="2026-01-28 18:15:43.760599609 +0000 UTC m=+154.587162430" Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.771469 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.271455867 +0000 UTC m=+155.098018688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.771137 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.764078 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" event={"ID":"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc","Type":"ContainerStarted","Data":"232dc9c5ae53b35e6fec0e884895b5f8becaf78b490dcf66e0050a584b979043"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.794439 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" event={"ID":"07d9a024-6342-42ba-8a0b-4db3aa777a82","Type":"ContainerStarted","Data":"53ba5ca3f8b3acb1f5e25c0476efc9564c22718b5e9b28fb5ad08e152e9984a9"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.824213 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-g5knd" podStartSLOduration=6.824164785 podStartE2EDuration="6.824164785s" podCreationTimestamp="2026-01-28 18:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.792575988 +0000 UTC m=+154.619138819" watchObservedRunningTime="2026-01-28 18:15:43.824164785 +0000 UTC m=+154.650727606" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.829962 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" event={"ID":"7b3b0534-3356-446a-91e8-dae980c402db","Type":"ContainerStarted","Data":"f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.831183 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.834114 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" podStartSLOduration=123.834087657 podStartE2EDuration="2m3.834087657s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.82959542 +0000 UTC m=+154.656158241" watchObservedRunningTime="2026-01-28 18:15:43.834087657 +0000 UTC m=+154.660650668" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.865805 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b5wzm container/marketplace-operator namespace/openshift-marketplace: Readiness 
probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.865877 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.875223 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.875616 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fn9d5" event={"ID":"365a9e45-74e9-4231-8ccf-c5fbf200ab83","Type":"ContainerStarted","Data":"08ad21707accc3f834748fd9d507769b137d814002f52158f67638eaab59faa3"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.876453 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.876769 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.376753 +0000 UTC m=+155.203315821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.877322 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podStartSLOduration=123.877308766 podStartE2EDuration="2m3.877308766s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.875850584 +0000 UTC m=+154.702413415" watchObservedRunningTime="2026-01-28 18:15:43.877308766 +0000 UTC m=+154.703871587" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.890398 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" event={"ID":"fa6948a7-6763-4c03-b6f9-ecfb38a8a064","Type":"ContainerStarted","Data":"9852d8ac758b7698e1f7ea6bc02cb4d86b83e3ec735ef920e1d541945c84e9e5"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893701 4985 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fdfqq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893745 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893782 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893863 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893929 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893958 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.903397 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.903461 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.904383 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-52cvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.904448 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.921880 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" podStartSLOduration=123.921849491 podStartE2EDuration="2m3.921849491s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.908620085 +0000 UTC m=+154.735182896" watchObservedRunningTime="2026-01-28 18:15:43.921849491 +0000 UTC m=+154.748412312" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.972768 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podStartSLOduration=123.972735978 podStartE2EDuration="2m3.972735978s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.937744433 +0000 UTC m=+154.764307254" watchObservedRunningTime="2026-01-28 18:15:43.972735978 +0000 UTC m=+154.799298799" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.973480 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" podStartSLOduration=123.973469978 podStartE2EDuration="2m3.973469978s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.968526868 +0000 UTC m=+154.795089689" watchObservedRunningTime="2026-01-28 18:15:43.973469978 +0000 UTC m=+154.800032799" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.977215 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 
18:15:44.003810 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" podStartSLOduration=124.00378414 podStartE2EDuration="2m4.00378414s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.001142665 +0000 UTC m=+154.827705496" watchObservedRunningTime="2026-01-28 18:15:44.00378414 +0000 UTC m=+154.830346961" Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.007849 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.507829915 +0000 UTC m=+155.334392736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.042929 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qnrsp" podStartSLOduration=124.042906332 podStartE2EDuration="2m4.042906332s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.041486981 +0000 UTC m=+154.868049802" watchObservedRunningTime="2026-01-28 18:15:44.042906332 +0000 UTC m=+154.869469153" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.078822 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.079223 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.579206103 +0000 UTC m=+155.405768924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.107636 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podStartSLOduration=124.10760767 podStartE2EDuration="2m4.10760767s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.096542136 +0000 UTC m=+154.923104947" watchObservedRunningTime="2026-01-28 18:15:44.10760767 +0000 UTC m=+154.934170491" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.119678 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podStartSLOduration=124.119656093 podStartE2EDuration="2m4.119656093s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.119404626 +0000 UTC m=+154.945967467" watchObservedRunningTime="2026-01-28 18:15:44.119656093 +0000 UTC m=+154.946218914" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.151102 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" podStartSLOduration=124.151077226 podStartE2EDuration="2m4.151077226s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.149429859 +0000 UTC m=+154.975992680" watchObservedRunningTime="2026-01-28 18:15:44.151077226 +0000 UTC m=+154.977640047" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.181058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.181691 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.681674155 +0000 UTC m=+155.508236976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.203804 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" podStartSLOduration=124.203782923 podStartE2EDuration="2m4.203782923s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.199562343 +0000 UTC m=+155.026125174" watchObservedRunningTime="2026-01-28 18:15:44.203782923 +0000 UTC m=+155.030345744" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.258066 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" podStartSLOduration=124.258042465 podStartE2EDuration="2m4.258042465s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.255531514 +0000 UTC m=+155.082094335" watchObservedRunningTime="2026-01-28 18:15:44.258042465 +0000 UTC m=+155.084605286" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.284823 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.285222 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.785206307 +0000 UTC m=+155.611769128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.344927 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-j6799" podStartSLOduration=124.344908884 podStartE2EDuration="2m4.344908884s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.302571071 +0000 UTC m=+155.129133892" watchObservedRunningTime="2026-01-28 18:15:44.344908884 +0000 UTC m=+155.171471705" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.345293 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" podStartSLOduration=124.345290475 podStartE2EDuration="2m4.345290475s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.342682601 +0000 UTC m=+155.169245422" watchObservedRunningTime="2026-01-28 18:15:44.345290475 +0000 UTC m=+155.171853296" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.376182 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fn9d5" podStartSLOduration=7.376153862 podStartE2EDuration="7.376153862s" podCreationTimestamp="2026-01-28 18:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.372847008 +0000 UTC m=+155.199409829" watchObservedRunningTime="2026-01-28 18:15:44.376153862 +0000 UTC m=+155.202716683" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.387048 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.387696 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.887677799 +0000 UTC m=+155.714240620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.398080 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2lzzr" podStartSLOduration=7.398051274 podStartE2EDuration="7.398051274s" podCreationTimestamp="2026-01-28 18:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.39086082 +0000 UTC m=+155.217423651" watchObservedRunningTime="2026-01-28 18:15:44.398051274 +0000 UTC m=+155.224614095"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.448614 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" podStartSLOduration=124.44858874 podStartE2EDuration="2m4.44858874s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.443891197 +0000 UTC m=+155.270454028" watchObservedRunningTime="2026-01-28 18:15:44.44858874 +0000 UTC m=+155.275151561"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.450707 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" podStartSLOduration=124.45069867 podStartE2EDuration="2m4.45069867s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.422439957 +0000 UTC m=+155.249002778" watchObservedRunningTime="2026-01-28 18:15:44.45069867 +0000 UTC m=+155.277261491"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.489815 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.490212 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.990194433 +0000 UTC m=+155.816757244 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.498912 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podStartSLOduration=124.49888737 podStartE2EDuration="2m4.49888737s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.498464208 +0000 UTC m=+155.325027039" watchObservedRunningTime="2026-01-28 18:15:44.49888737 +0000 UTC m=+155.325450191"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.524352 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hpz9q" podStartSLOduration=124.524329703 podStartE2EDuration="2m4.524329703s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.52389106 +0000 UTC m=+155.350453881" watchObservedRunningTime="2026-01-28 18:15:44.524329703 +0000 UTC m=+155.350892534"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.591779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.592234 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.092218502 +0000 UTC m=+155.918781323 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.617267 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.618486 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.618535 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.693033 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.693600 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.193563752 +0000 UTC m=+156.020126573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.795014 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.795476 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.295456118 +0000 UTC m=+156.122018939 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.896157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.896609 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.396595022 +0000 UTC m=+156.223157843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.898085 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cfde496ca4baaeceb8a817a29a2696a5661461cf557694ecd9171c6c50943829"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.901128 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" event={"ID":"bf0cd343-6643-4463-bb9b-6e291a601901","Type":"ContainerStarted","Data":"f982907c80716b41b2268550bbb2daa5e64386dd6432f8608c172c4226928c37"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.903580 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" event={"ID":"3c194d09-8a64-45a1-b40b-d1ea249b2626","Type":"ContainerStarted","Data":"37e117138c941f0cebea21f7f5b8b3c3deec93036ed86c7b058c4b0b27ff8bc6"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.906733 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"05f111c74c9500c86fafc215a536173dfc3b7fa58cd6b2b982164a7fd7c3d8ea"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.906910 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.908991 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fn9d5" event={"ID":"365a9e45-74e9-4231-8ccf-c5fbf200ab83","Type":"ContainerStarted","Data":"d2c9f47132c3973975eadd76bbe8f6211b7751c6743abf694805a05f9404eacb"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.911040 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" event={"ID":"45774b89-be22-4692-a944-e5f12f898ea6","Type":"ContainerStarted","Data":"1b56c61f29869e6d115cb72d69bb76bf27b5b3ec3c86dee45d55a082afb8edfe"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.912518 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c23e9ca62fecc1eaeec6e46012eb54880b96d777b6c5e6f65e1279af6067c6ed"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.915436 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" event={"ID":"ebf5f82e-2a14-49d9-b670-59ed73e71203","Type":"ContainerStarted","Data":"632767b61ba7c6fe31c83bd2e9588921f2a06fd37cdf52d071e99c26ec9f8357"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.918117 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerStarted","Data":"9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.918556 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.920271 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" event={"ID":"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3","Type":"ContainerStarted","Data":"970030d2427f110d447404f6fef91f4110a1a65dcf9b743c75b91570cf0933d3"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.922113 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" event={"ID":"c731b198-314f-46a9-ad13-a4cc6c7bab94","Type":"ContainerStarted","Data":"e7798f4962eade42046a64293003b8e80cca5c5b2a0672f8d559d427a29ec3d0"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.924647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" event={"ID":"a3b95c03-1b0d-4c06-bb85-2f9ed127737b","Type":"ContainerStarted","Data":"961aaa261c9d6ac69a1bf08ecd14fc941c76adcc2cce7c9fd3a34201dd2adc4f"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.936843 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" event={"ID":"fa6948a7-6763-4c03-b6f9-ecfb38a8a064","Type":"ContainerStarted","Data":"65fdc16f968f16491c13b13b383d45b1496d97698761eb8019fd722bab5c5e95"}
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.937279 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.937331 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.937722 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-52cvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.937747 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938071 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938108 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938393 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938558 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938463 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b5wzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938776 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.939046 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.939074 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.939484 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.939512 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.971748 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podStartSLOduration=124.971728297 podStartE2EDuration="2m4.971728297s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.969290788 +0000 UTC m=+155.795853619" watchObservedRunningTime="2026-01-28 18:15:44.971728297 +0000 UTC m=+155.798291118"
Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.996723 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" podStartSLOduration=124.996699987 podStartE2EDuration="2m4.996699987s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.996188702 +0000 UTC m=+155.822751523" watchObservedRunningTime="2026-01-28 18:15:44.996699987 +0000 UTC m=+155.823262808"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:44.998910 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.002171 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.502155842 +0000 UTC m=+156.328718673 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.039166 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" podStartSLOduration=125.039130843 podStartE2EDuration="2m5.039130843s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.029750346 +0000 UTC m=+155.856313167" watchObservedRunningTime="2026-01-28 18:15:45.039130843 +0000 UTC m=+155.865693664"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.070153 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" podStartSLOduration=125.070134094 podStartE2EDuration="2m5.070134094s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.068535448 +0000 UTC m=+155.895098289" watchObservedRunningTime="2026-01-28 18:15:45.070134094 +0000 UTC m=+155.896696925"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.112614 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.112895 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.612881489 +0000 UTC m=+156.439444310 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.164101 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" podStartSLOduration=125.164075533 podStartE2EDuration="2m5.164075533s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.111631793 +0000 UTC m=+155.938194614" watchObservedRunningTime="2026-01-28 18:15:45.164075533 +0000 UTC m=+155.990638344"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.205866 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" podStartSLOduration=125.20584169 podStartE2EDuration="2m5.20584169s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.197772771 +0000 UTC m=+156.024335592" watchObservedRunningTime="2026-01-28 18:15:45.20584169 +0000 UTC m=+156.032404531"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.213963 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.214450 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.714431034 +0000 UTC m=+156.540993855 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.249041 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" podStartSLOduration=125.249014147 podStartE2EDuration="2m5.249014147s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.247618968 +0000 UTC m=+156.074181789" watchObservedRunningTime="2026-01-28 18:15:45.249014147 +0000 UTC m=+156.075576968"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.294935 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podStartSLOduration=126.294902801 podStartE2EDuration="2m6.294902801s" podCreationTimestamp="2026-01-28 18:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.289977431 +0000 UTC m=+156.116540262" watchObservedRunningTime="2026-01-28 18:15:45.294902801 +0000 UTC m=+156.121465622"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.315280 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.315449 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.815424674 +0000 UTC m=+156.641987505 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.315558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.315917 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.815906508 +0000 UTC m=+156.642469339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.351730 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" podStartSLOduration=125.351696245 podStartE2EDuration="2m5.351696245s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.346539669 +0000 UTC m=+156.173102510" watchObservedRunningTime="2026-01-28 18:15:45.351696245 +0000 UTC m=+156.178259066"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.417045 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.417215 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.917191096 +0000 UTC m=+156.743753927 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.417330 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.417697 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.917686711 +0000 UTC m=+156.744249532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.518397 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.518623 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.018586428 +0000 UTC m=+156.845149249 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.518977 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.519376 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.01936125 +0000 UTC m=+156.845924071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.620344 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.620523 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.120481884 +0000 UTC m=+156.947044705 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.620594 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.621138 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.121114432 +0000 UTC m=+156.947677323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.625845 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 18:15:45 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld
Jan 28 18:15:45 crc kubenswrapper[4985]: [+]process-running ok
Jan 28 18:15:45 crc kubenswrapper[4985]: healthz check failed
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.625914 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.721735 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.721991 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.221962178 +0000 UTC m=+157.048524989 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.722045 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.722436 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.222421781 +0000 UTC m=+157.048984602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.823389 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.823579 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.323549364 +0000 UTC m=+157.150112185 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.823911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.824341 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.324332106 +0000 UTC m=+157.150894927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.925732 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.925964 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.425934293 +0000 UTC m=+157.252497114 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.926277 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.926621 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.426613162 +0000 UTC m=+157.253175983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.944351 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409"}
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.944848 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b5wzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.944889 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused"
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.945379 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.945421 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.027381 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.028617 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.52857708 +0000 UTC m=+157.355139891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.129559 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.130033 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.630012763 +0000 UTC m=+157.456575584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.230941 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.231120 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.731100985 +0000 UTC m=+157.557663796 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.231216 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.231573 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.731563699 +0000 UTC m=+157.558126520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.332042 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.332339 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.832298221 +0000 UTC m=+157.658861042 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.332384 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.332690 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.832676622 +0000 UTC m=+157.659239443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.433428 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.433568 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.933545639 +0000 UTC m=+157.760108460 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.433688 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.434052 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.934043873 +0000 UTC m=+157.760606694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.434177 4985 csr.go:261] certificate signing request csr-hfr5g is approved, waiting to be issued
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.448547 4985 csr.go:257] certificate signing request csr-hfr5g is issued
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.534585 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.534807 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.034776605 +0000 UTC m=+157.861339426 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.534873 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.535265 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.035236468 +0000 UTC m=+157.861799279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.621401 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 18:15:46 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld
Jan 28 18:15:46 crc kubenswrapper[4985]: [+]process-running ok
Jan 28 18:15:46 crc kubenswrapper[4985]: healthz check failed
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.621470 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.635644 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.635885 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.135841387 +0000 UTC m=+157.962404208 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.635976 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.636394 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.136386393 +0000 UTC m=+157.962949214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.737493 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.737692 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.237662371 +0000 UTC m=+158.064225202 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.737840 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.738215 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.238197186 +0000 UTC m=+158.064760007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.839365 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.839874 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.339845245 +0000 UTC m=+158.166408066 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.941496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.941894 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.441880205 +0000 UTC m=+158.268443026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.042649 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.042936 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.542888725 +0000 UTC m=+158.369451556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.043128 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.043573 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.543558164 +0000 UTC m=+158.370120985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.144379 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.144815 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.644800191 +0000 UTC m=+158.471363012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.246282 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.246661 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.746644425 +0000 UTC m=+158.573207246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.347365 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.347791 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.847768819 +0000 UTC m=+158.674331640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.352395 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.353545 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.356853 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449545 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.449635 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.949603093 +0000 UTC m=+158.776165914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449663 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 18:10:46 +0000 UTC, rotation deadline is 2026-11-03 02:00:33.068015721 +0000 UTC Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449721 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6679h44m45.618297975s for next certificate rotation Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449703 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449903 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.473923 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.540955 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-nbllw"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.542162 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.551836 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.551865 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552090 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.552193 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.052172198 +0000 UTC m=+158.878735019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552334 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552934 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.578862 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbllw"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.600047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.623786 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:47 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:47 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:47 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.625421 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.653718 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.654088 4985 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.654243 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.654352 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.654493 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.154472545 +0000 UTC m=+158.981035366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.658394 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.659066 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.662370 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.662675 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.667033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.684092 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.745846 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ngcsk"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.747134 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.755965 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756192 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756243 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756282 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756310 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.756392 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.256354901 +0000 UTC m=+159.082917872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.757334 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.757638 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.257625597 +0000 UTC m=+159.084188418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.759038 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.759164 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.795011 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.824332 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngcsk"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.856840 4985 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.860396 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.860869 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.860939 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.860993 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.861061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.861117 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.861692 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.361667063 +0000 UTC m=+159.188229884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.861778 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.940093 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.960497 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkbjb"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.964102 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.964151 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.964234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.964284 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.964930 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.464908167 +0000 UTC m=+159.291470988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.965238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.966122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.974488 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.979060 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.032579 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkbjb"] Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.042773 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"806885dc798ad388908373bc69cdee91b5601deeb01836e72ab0bfaaa4c37352"} Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.066687 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.066922 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.066956 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.067012 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.067184 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.567167273 +0000 UTC m=+159.393730094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.070696 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.085633 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.173642 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.173730 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.173754 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.173773 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.174222 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") pod 
\"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.174460 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.175050 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.675033919 +0000 UTC m=+159.501596750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.233980 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.274699 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.275095 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.775071602 +0000 UTC m=+159.601634423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.318791 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.351597 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.378364 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.378805 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.878790339 +0000 UTC m=+159.705353160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.399456 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.484832 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.486476 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.986458209 +0000 UTC m=+159.813021030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.595961 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.596389 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:15:49.096372732 +0000 UTC m=+159.922935553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.626518 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbllw"] Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.634448 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:48 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:48 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:48 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.634536 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.698894 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.699325 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.199308078 +0000 UTC m=+160.025870899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.803149 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.803593 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:15:49.303575661 +0000 UTC m=+160.130138482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.909197 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.909601 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.409577403 +0000 UTC m=+160.236140224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.927438 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.964223 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.964797 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.970775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.999539 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.013335 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.013689 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:15:49.513677011 +0000 UTC m=+160.340239832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.034199 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.094367 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngcsk"] Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.125442 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5593b8be-de94-4ed3-81cb-449457767772","Type":"ContainerStarted","Data":"03a87c1436d3238d93dfc27faef0f425b055e15e52fd95499db1893c39fae51c"} Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.131335 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.133018 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.632994412 +0000 UTC m=+160.459557233 (durationBeforeRetry 500ms). 
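The half-second cadence of the failures above is the kubelet's operation executor at work: every MountDevice/TearDown attempt fails because kubevirt.io.hostpath-provisioner has not yet registered with the kubelet, so nestedpendingoperations reschedules the operation with a durationBeforeRetry (500ms here) that grows on repeated failures. A minimal Go sketch of that retry shape using the apimachinery wait helpers; mountDevice and driverRegistered are illustrative stand-ins, not the kubelet's real call chain:

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // mountDevice is a hypothetical stand-in for the CSI MountDevice call:
    // it keeps failing until the driver has registered its plugin socket.
    func mountDevice(driverRegistered func() bool) error {
        if !driverRegistered() {
            return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
        }
        return nil
    }

    func main() {
        start := time.Now()
        registered := func() bool { return time.Since(start) > 1200*time.Millisecond }

        // Mirror the log's durationBeforeRetry: start at 500ms and back off
        // exponentially while the operation keeps failing.
        backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            if mErr := mountDevice(registered); mErr != nil {
                fmt.Println("retrying:", mErr)
                return false, nil // not done yet; sleep and try again
            }
            return true, nil // mounted
        })
        fmt.Println("final result:", err)
    }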
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.170199 4985 generic.go:334] "Generic (PLEG): container finished" podID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerID="f89df29bdb5f4a1ac1d8a46bc1cdba1d48b8e3013145698fb6cdebd84b29470e" exitCode=0 Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.170309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerDied","Data":"f89df29bdb5f4a1ac1d8a46bc1cdba1d48b8e3013145698fb6cdebd84b29470e"} Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.170341 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerStarted","Data":"29cf66044b42b3771161b4b736214738baedd3db9a4eab25aec806dff09290a6"} Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.190217 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.200909 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerStarted","Data":"fee5ad9c634324fb795c0ec18b20b982cec13ce8646e5a41d3259fd33ab8724c"} Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.218799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.225374 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.234835 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.235403 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.735386492 +0000 UTC m=+160.561949313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.235866 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.235899 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.244830 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.253506 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkbjb"] Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.255733 4985 patch_prober.go:28] interesting pod/console-f9d7485db-b5t5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.255787 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b5t5k" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.337381 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.337635 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.837586875 +0000 UTC m=+160.664149696 (durationBeforeRetry 500ms). 
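The probe failures interleaved with the volume errors follow the kubelet's HTTP prober rule: a status code in [200, 400) passes, anything else, or a transport error such as the "connection refused" seen above, fails; and a failing startup probe keeps the container from being marked started. A hedged sketch of that success rule (illustrative, not the kubelet's actual prober code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeHTTP applies the success rule the kubelet's HTTP prober uses:
    // any status in [200, 400) passes; anything else, or a transport error
    // (e.g. connection refused), fails.
    func probeHTTP(url string, timeout time.Duration) (ok bool, detail string) {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return false, err.Error() // e.g. "connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return true, fmt.Sprintf("status %d", resp.StatusCode)
        }
        return false, fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
        ok, detail := probeHTTP("http://10.217.0.8:8080/", time.Second)
        fmt.Println(ok, detail)
    }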
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.338123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.341203 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.841180988 +0000 UTC m=+160.667743809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.348199 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.418983 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.419044 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.419315 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.419397 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.438912 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.440499 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.94048264 +0000 UTC m=+160.767045461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.539854 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"] Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.541294 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.542105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.542541 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.04252979 +0000 UTC m=+160.869092611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.560801 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.563584 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"] Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.628778 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:49 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:49 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:49 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.628848 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.645900 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.646205 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.646241 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.646342 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.646471 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.146449373 +0000 UTC m=+160.973012194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.747318 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.747850 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.747871 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.747897 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.749322 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.749676 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.249660456 +0000 UTC m=+161.076223277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.749911 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.782737 4985 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.783513 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.851898 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.852470 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.352453147 +0000 UTC m=+161.179015968 (durationBeforeRetry 500ms). 
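The plugin_watcher.go record above is the turning point of this whole sequence: the hostpath provisioner has finally dropped its registration socket into /var/lib/kubelet/plugins_registry, where the kubelet watches for new *.sock files and then completes a registration handshake over gRPC. A minimal directory-watch sketch in the same spirit, built on fsnotify (which the kubelet's plugin watcher also uses); the handler here just logs where the real kubelet would dial the socket:

    package main

    import (
        "log"
        "path/filepath"
        "strings"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        // The kubelet watches this directory for driver registration sockets,
        // e.g. kubevirt.io.hostpath-provisioner-reg.sock in the log above.
        const pluginDir = "/var/lib/kubelet/plugins_registry"

        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add(pluginDir); err != nil {
            log.Fatal(err)
        }

        for {
            select {
            case ev := <-w.Events:
                // A newly created *.sock file is a driver asking to register;
                // the real kubelet then dials it and runs the plugin
                // registration gRPC handshake.
                if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, ".sock") {
                    log.Printf("plugin socket appeared: %s", filepath.Base(ev.Name))
                }
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }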
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.927626 4985 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2wxf2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]log ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]etcd ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/max-in-flight-filter ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 28 18:15:49 crc kubenswrapper[4985]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/openshift.io-startinformers ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 28 18:15:49 crc kubenswrapper[4985]: livez check failed
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.927698 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podUID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.928002 4985 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T18:15:49.782777017Z","Handler":null,"Name":""}
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.928447 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"]
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.929633 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vq448"
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.942703 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"]
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.944175 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.944916 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.954312 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.954774 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.454759534 +0000 UTC m=+161.281322355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.005599 4985 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.005636 4985 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.056409 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.056747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.056820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.056923 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") pod 
\"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.065931 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159055 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159197 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159889 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.160171 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.200588 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.216709 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerID="5959b03d9788b40f0a702f2c357697b3ecb07a0cda1a9c0b368fd63267cd0bea" exitCode=0
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.216971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerDied","Data":"5959b03d9788b40f0a702f2c357697b3ecb07a0cda1a9c0b368fd63267cd0bea"}
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.222073 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerID="6fbcabfceffdf85763f4008a949c3b5ecf075282566d7602a9169724a8470662" exitCode=0
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.222427 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerDied","Data":"6fbcabfceffdf85763f4008a949c3b5ecf075282566d7602a9169724a8470662"}
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.222462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerStarted","Data":"7de4f851d6fd3b3bdf2435ffb6090fbd2d50bbda34ffd7c0a08f88549a7af86b"}
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.233070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5593b8be-de94-4ed3-81cb-449457767772","Type":"ContainerStarted","Data":"b8a8d74d6582f05ce9c27631887a13eeb3a1cc783db0fd72172a5370c7d0843d"}
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.253228 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vq448"
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.261281 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff1a5336-5c99-49fa-bb89-311781866770" containerID="081b66f566faa6677cfda3978e83d93b4dce7e5760fe6c65c107d2c177beeb71" exitCode=0
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.261415 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerDied","Data":"081b66f566faa6677cfda3978e83d93b4dce7e5760fe6c65c107d2c177beeb71"}
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.261456 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerStarted","Data":"443d55c2efdfe0f8e6f7fa0e88bf057b626e08f470a93af561b93e9387fb0988"}
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.282399 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"82bed0d8a42bca7e53b39c9544bdc0936cdb44ffd82eeecb67a51d1676f725c4"}
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.282444 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"670c758c6e0b4d061db4a1652fe94536b8c4f9f8219d2776bceabf3e6e3134da"}
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.315193 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
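The csi_attacher.go record above explains why MountDevice finally "succeeds" without doing anything: once registered, the driver is asked for its node capabilities, and a driver that does not advertise STAGE_UNSTAGE_VOLUME gets no NodeStageVolume call; the volume goes straight to NodePublishVolume during SetUp (visible a few records below). A sketch of that capability check against the CSI spec's Go types; the socket path matches the endpoint logged at 18:15:50.005599, everything else is illustrative:

    package main

    import (
        "context"
        "fmt"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    // nodeSupportsStageUnstage mirrors the decision visible in the log: query
    // NodeGetCapabilities and look for STAGE_UNSTAGE_VOLUME. If absent, the
    // kubelet skips NodeStageVolume (MountDevice) and only calls
    // NodePublishVolume. Error handling trimmed for brevity.
    func nodeSupportsStageUnstage(ctx context.Context, conn *grpc.ClientConn) (bool, error) {
        client := csi.NewNodeClient(conn)
        resp, err := client.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
        if err != nil {
            return false, err
        }
        for _, c := range resp.GetCapabilities() {
            rpc := c.GetRpc()
            if rpc != nil && rpc.GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        // Dialing the driver's node socket is assumed here; the path matches
        // the endpoint the log reports for this driver.
        conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        ok, err := nodeSupportsStageUnstage(context.Background(), conn)
        fmt.Println("STAGE_UNSTAGE_VOLUME:", ok, err)
    }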
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.315280 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.344704 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.344687856 podStartE2EDuration="3.344687856s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:50.343272035 +0000 UTC m=+161.169834856" watchObservedRunningTime="2026-01-28 18:15:50.344687856 +0000 UTC m=+161.171250677" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.345191 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.370421 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"] Jan 28 18:15:50 crc kubenswrapper[4985]: W0128 18:15:50.388889 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd797afdd_19c6_45ed_81c8_5fa31175e121.slice/crio-b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c WatchSource:0}: Error finding container b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c: Status 404 returned error can't find the container with id b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.430945 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podStartSLOduration=13.430911726 podStartE2EDuration="13.430911726s" podCreationTimestamp="2026-01-28 18:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:50.397098745 +0000 UTC m=+161.223661566" watchObservedRunningTime="2026-01-28 18:15:50.430911726 +0000 UTC m=+161.257474547" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.483899 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.529920 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"] Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.531051 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.534986 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.546312 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"] Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.581932 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.600480 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.620652 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.628659 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:50 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:50 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:50 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.628739 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.644605 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.669865 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"] Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.685607 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.687110 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.687235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.687272 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.788699 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.788778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.788795 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.789741 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.789990 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.822499 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") pod \"redhat-operators-zcwgk\" (UID: 
\"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.887646 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.936583 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"] Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.938054 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.961070 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.021696 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.024047 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.036371 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.036682 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.065272 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.096190 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.096282 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.096357 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpdsv\" (UniqueName: \"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.198510 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.198562 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpdsv\" (UniqueName: 
\"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.199080 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.199114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.199138 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.200238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.203974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.224492 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.226394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpdsv\" (UniqueName: \"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.273466 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfzc"
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.286438 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.305343 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.305435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.305550 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.314234 4985 generic.go:334] "Generic (PLEG): container finished" podID="5593b8be-de94-4ed3-81cb-449457767772" containerID="b8a8d74d6582f05ce9c27631887a13eeb3a1cc783db0fd72172a5370c7d0843d" exitCode=0
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.314528 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5593b8be-de94-4ed3-81cb-449457767772","Type":"ContainerDied","Data":"b8a8d74d6582f05ce9c27631887a13eeb3a1cc783db0fd72172a5370c7d0843d"}
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.317649 4985 generic.go:334] "Generic (PLEG): container finished" podID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerID="1c1dfa1718d5bb120e659769c80766e3c5cedbd440f581ae9a47ced34819aecd" exitCode=0
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.317694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerDied","Data":"1c1dfa1718d5bb120e659769c80766e3c5cedbd440f581ae9a47ced34819aecd"}
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.317713 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerStarted","Data":"b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c"}
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.320590 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" event={"ID":"23852c5a-64eb-4a56-8fbb-2e91b16a8429","Type":"ContainerStarted","Data":"718f56cadfa73ec9c883cb72f3a4ad761b62779dbd38dd0559a00a1f1b0a3abc"}
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.323047 4985 generic.go:334] "Generic (PLEG): container finished" podID="bebbf794-5459-4a75-bff1-92b7551d4784" containerID="e42228c4ddd411e6182ff6bcd41d0e27a2e8b74487dc7087bd1ccdb69c1e91bf" exitCode=0
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.323682 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerDied","Data":"e42228c4ddd411e6182ff6bcd41d0e27a2e8b74487dc7087bd1ccdb69c1e91bf"}
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.323714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerStarted","Data":"4227c1ef4517986db5b63f69f417525b1efc3dddfa056b58023dfaf2602681c9"}
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.326862 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.331189 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"]
Jan 28 18:15:51 crc kubenswrapper[4985]: W0128 18:15:51.362351 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf17410ee_fc07_4e6c_8262_d3dad9ca4a5d.slice/crio-2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d WatchSource:0}: Error finding container 2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d: Status 404 returned error can't find the container with id 2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d
Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.373088 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.614488 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.625595 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:51 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:51 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:51 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.625662 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:51 crc kubenswrapper[4985]: W0128 18:15:51.662831 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod478dee72_717a_448e_b14d_15d600c82eb5.slice/crio-687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f WatchSource:0}: Error finding container 687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f: Status 404 returned error can't find the container with id 687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.927554 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.334421 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a7c01a9f-20e3-411e-b7da-d21be45aba82","Type":"ContainerStarted","Data":"a12184f6c2a48cfdc9dbfa4c6e29637c2b0a033211e9e57f5e3cd9fc0e34bfa4"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.339375 4985 generic.go:334] "Generic (PLEG): container finished" podID="478dee72-717a-448e-b14d-15d600c82eb5" containerID="5673793a26abba26b8f6d32fd5a5358bd49bc89bef0867e3813c049e8ce5af23" exitCode=0 Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.339497 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerDied","Data":"5673793a26abba26b8f6d32fd5a5358bd49bc89bef0867e3813c049e8ce5af23"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.339637 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerStarted","Data":"687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.344794 4985 generic.go:334] "Generic (PLEG): container finished" podID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerID="232f8967da98b027f9bf4b5329e389ea4efabb6b13f4e9043541624ffe8ba02b" exitCode=0 Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.346442 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" 
event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerDied","Data":"232f8967da98b027f9bf4b5329e389ea4efabb6b13f4e9043541624ffe8ba02b"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.346583 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerStarted","Data":"2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.353120 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" event={"ID":"23852c5a-64eb-4a56-8fbb-2e91b16a8429","Type":"ContainerStarted","Data":"2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.353562 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.397714 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" podStartSLOduration=132.397667227 podStartE2EDuration="2m12.397667227s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:52.390680989 +0000 UTC m=+163.217243810" watchObservedRunningTime="2026-01-28 18:15:52.397667227 +0000 UTC m=+163.224230048" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.610628 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.618799 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:52 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:52 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:52 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.618869 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.823370 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.951456 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") pod \"5593b8be-de94-4ed3-81cb-449457767772\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.951721 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") pod \"5593b8be-de94-4ed3-81cb-449457767772\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.952223 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5593b8be-de94-4ed3-81cb-449457767772" (UID: "5593b8be-de94-4ed3-81cb-449457767772"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.977941 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5593b8be-de94-4ed3-81cb-449457767772" (UID: "5593b8be-de94-4ed3-81cb-449457767772"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.053781 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.053837 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.404172 4985 generic.go:334] "Generic (PLEG): container finished" podID="a7c01a9f-20e3-411e-b7da-d21be45aba82" containerID="c0b6373de32d25637f399a6feae262091a19d13a816cfb3455bbb1c28479e246" exitCode=0 Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.404284 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a7c01a9f-20e3-411e-b7da-d21be45aba82","Type":"ContainerDied","Data":"c0b6373de32d25637f399a6feae262091a19d13a816cfb3455bbb1c28479e246"} Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.445061 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5593b8be-de94-4ed3-81cb-449457767772","Type":"ContainerDied","Data":"03a87c1436d3238d93dfc27faef0f425b055e15e52fd95499db1893c39fae51c"} Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.445119 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03a87c1436d3238d93dfc27faef0f425b055e15e52fd95499db1893c39fae51c" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.445236 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.451106 4985 generic.go:334] "Generic (PLEG): container finished" podID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" containerID="437ea022ca695dd3c8be1cbb1b44f690df361a980e7c2eb2985b0f8b38dc9e0c" exitCode=0 Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.451866 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" event={"ID":"1030ed14-9fc1-4ec9-a93c-13eab69320ae","Type":"ContainerDied","Data":"437ea022ca695dd3c8be1cbb1b44f690df361a980e7c2eb2985b0f8b38dc9e0c"} Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.640787 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:53 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:53 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:53 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.640859 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.242123 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.248217 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.617770 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:54 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:54 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:54 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.617871 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.837504 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.892949 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") pod \"a7c01a9f-20e3-411e-b7da-d21be45aba82\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.893074 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a7c01a9f-20e3-411e-b7da-d21be45aba82" (UID: "a7c01a9f-20e3-411e-b7da-d21be45aba82"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.893196 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") pod \"a7c01a9f-20e3-411e-b7da-d21be45aba82\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.893567 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.902455 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a7c01a9f-20e3-411e-b7da-d21be45aba82" (UID: "a7c01a9f-20e3-411e-b7da-d21be45aba82"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.904745 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.994502 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") pod \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.995297 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") pod \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.995432 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") pod \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.995865 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.996445 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume" (OuterVolumeSpecName: "config-volume") pod "1030ed14-9fc1-4ec9-a93c-13eab69320ae" (UID: "1030ed14-9fc1-4ec9-a93c-13eab69320ae"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.001456 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88" (OuterVolumeSpecName: "kube-api-access-p2d88") pod "1030ed14-9fc1-4ec9-a93c-13eab69320ae" (UID: "1030ed14-9fc1-4ec9-a93c-13eab69320ae"). InnerVolumeSpecName "kube-api-access-p2d88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.001819 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1030ed14-9fc1-4ec9-a93c-13eab69320ae" (UID: "1030ed14-9fc1-4ec9-a93c-13eab69320ae"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.096997 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.097038 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.097048 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.525612 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a7c01a9f-20e3-411e-b7da-d21be45aba82","Type":"ContainerDied","Data":"a12184f6c2a48cfdc9dbfa4c6e29637c2b0a033211e9e57f5e3cd9fc0e34bfa4"} Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.526510 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.526891 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12184f6c2a48cfdc9dbfa4c6e29637c2b0a033211e9e57f5e3cd9fc0e34bfa4" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.535587 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" event={"ID":"1030ed14-9fc1-4ec9-a93c-13eab69320ae","Type":"ContainerDied","Data":"8f93ab89ce3c6adab00c97ddb3618e2ccd297812e80918e595461de298f590fd"} Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.535652 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.535666 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f93ab89ce3c6adab00c97ddb3618e2ccd297812e80918e595461de298f590fd" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.618946 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:55 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:55 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:55 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.619114 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:56 crc kubenswrapper[4985]: I0128 18:15:56.620287 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:56 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:56 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:56 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:56 crc kubenswrapper[4985]: I0128 18:15:56.620373 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:57 crc kubenswrapper[4985]: I0128 18:15:57.619007 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:57 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:57 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:57 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:57 crc kubenswrapper[4985]: I0128 18:15:57.619419 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:58 crc kubenswrapper[4985]: I0128 18:15:58.618935 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:58 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:58 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:58 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:58 crc kubenswrapper[4985]: I0128 18:15:58.619025 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.217882 4985 patch_prober.go:28] interesting pod/console-f9d7485db-b5t5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.217966 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b5t5k" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.417667 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.417761 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.417666 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.417829 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.657656 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:59 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:59 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:59 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.657738 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:16:00 crc kubenswrapper[4985]: I0128 18:16:00.618596 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:16:00 crc kubenswrapper[4985]: I0128 18:16:00.621235 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:16:02 crc kubenswrapper[4985]: I0128 18:16:02.758034 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:16:02 crc kubenswrapper[4985]: I0128 18:16:02.764807 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:16:02 crc kubenswrapper[4985]: I0128 18:16:02.985545 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:16:05 crc kubenswrapper[4985]: I0128 18:16:05.580802 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:16:05 crc kubenswrapper[4985]: I0128 18:16:05.581051 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" containerID="cri-o://c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76" gracePeriod=30 Jan 28 18:16:05 crc kubenswrapper[4985]: I0128 18:16:05.593288 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:16:05 crc kubenswrapper[4985]: I0128 18:16:05.593531 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" containerID="cri-o://d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548" gracePeriod=30 Jan 28 18:16:06 crc kubenswrapper[4985]: I0128 18:16:06.653658 4985 generic.go:334] "Generic (PLEG): container finished" podID="81ef78af-dc11-4231-9693-eb088718d103" containerID="c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76" exitCode=0 Jan 28 18:16:06 crc kubenswrapper[4985]: I0128 18:16:06.653775 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" event={"ID":"81ef78af-dc11-4231-9693-eb088718d103","Type":"ContainerDied","Data":"c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76"} Jan 28 18:16:06 crc kubenswrapper[4985]: I0128 18:16:06.656046 4985 generic.go:334] "Generic (PLEG): container finished" podID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerID="d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548" exitCode=0 Jan 28 18:16:06 crc kubenswrapper[4985]: I0128 18:16:06.656085 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" event={"ID":"44d556c9-6c8e-45d3-bec8-303081e8c4e1","Type":"ContainerDied","Data":"d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548"} Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.228406 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.235520 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.418654 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.418749 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.420120 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.420197 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.420270 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.421068 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.421148 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.421328 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b"} pod="openshift-console/downloads-7954f5f757-hpz9q" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.421454 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" containerID="cri-o://996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b" gracePeriod=2 Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.975799 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:09 crc 
kubenswrapper[4985]: I0128 18:16:09.975884 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.315826 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-52cvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.315980 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.683164 4985 generic.go:334] "Generic (PLEG): container finished" podID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerID="996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b" exitCode=0 Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.683226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpz9q" event={"ID":"25061ce4-ca31-4da7-ad36-c6535e1d2028","Type":"ContainerDied","Data":"996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b"} Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.692562 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:16:11 crc kubenswrapper[4985]: I0128 18:16:11.189539 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:16:11 crc kubenswrapper[4985]: I0128 18:16:11.189897 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:16:19 crc kubenswrapper[4985]: I0128 18:16:19.419190 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:19 crc kubenswrapper[4985]: I0128 18:16:19.420060 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:19 crc 
kubenswrapper[4985]: I0128 18:16:19.976721 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:19 crc kubenswrapper[4985]: I0128 18:16:19.977232 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:20 crc kubenswrapper[4985]: I0128 18:16:20.315547 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-52cvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:20 crc kubenswrapper[4985]: I0128 18:16:20.315655 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:20 crc kubenswrapper[4985]: I0128 18:16:20.556058 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:16:22 crc kubenswrapper[4985]: I0128 18:16:22.498503 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:16:24 crc kubenswrapper[4985]: E0128 18:16:24.213337 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 18:16:24 crc kubenswrapper[4985]: E0128 18:16:24.213878 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzrfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nbllw_openshift-marketplace(b3c2ecc0-c6a6-468b-bdcf-e84c2831a580): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:24 crc kubenswrapper[4985]: E0128 18:16:24.215111 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nbllw" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.367869 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.368541 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" containerName="collect-profiles" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.368563 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" containerName="collect-profiles" Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.368584 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5593b8be-de94-4ed3-81cb-449457767772" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.368598 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="5593b8be-de94-4ed3-81cb-449457767772" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.368625 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c01a9f-20e3-411e-b7da-d21be45aba82" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.368638 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c01a9f-20e3-411e-b7da-d21be45aba82" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.368852 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="5593b8be-de94-4ed3-81cb-449457767772" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 
18:16:28.368888 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c01a9f-20e3-411e-b7da-d21be45aba82" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.370149 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" containerName="collect-profiles" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.371104 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.379324 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.381188 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.381575 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.399887 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nbllw" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.467799 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.467891 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.471543 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.501062 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.501370 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.501385 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.501504 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.501893 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.518450 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.569619 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.569998 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570200 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570433 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570543 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570986 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570864 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca" (OuterVolumeSpecName: "client-ca") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571105 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571207 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config" (OuterVolumeSpecName: "config") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571368 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571528 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571656 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571828 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571840 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") on node \"crc\" DevicePath \"\"" 
Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571854 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571904 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.587606 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.589766 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm" (OuterVolumeSpecName: "kube-api-access-rfnlm") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "kube-api-access-rfnlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.589871 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.673729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674385 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674527 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " 
pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674706 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674877 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674992 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.675315 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.677642 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.680236 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.693224 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.767240 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.806281 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" event={"ID":"81ef78af-dc11-4231-9693-eb088718d103","Type":"ContainerDied","Data":"6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2"} Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.806364 4985 scope.go:117] "RemoveContainer" containerID="c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.806378 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.852081 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.855030 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.247015 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.276805 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ef78af-dc11-4231-9693-eb088718d103" path="/var/lib/kubelet/pods/81ef78af-dc11-4231-9693-eb088718d103/volumes" Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.417927 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.418026 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.441736 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.976289 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.976383 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.161028 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.162982 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.209486 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.335558 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.335632 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.335654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437035 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437150 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437199 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437296 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437413 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.458054 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.514546 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.697217 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.740930 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:16:36 crc kubenswrapper[4985]: E0128 18:16:36.741239 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.741280 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.741433 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.742084 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.752557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.788147 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.788606 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.788943 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.789067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " 
pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.852117 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" event={"ID":"44d556c9-6c8e-45d3-bec8-303081e8c4e1","Type":"ContainerDied","Data":"0e823a46854aa252fe9015e01e9cddb6f75ae7ba4ce62f7d7338ee347ff378f1"} Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.852430 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.889829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") pod \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.890473 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") pod \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.890854 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") pod \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891147 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") pod \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891615 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca" (OuterVolumeSpecName: "client-ca") pod "44d556c9-6c8e-45d3-bec8-303081e8c4e1" (UID: "44d556c9-6c8e-45d3-bec8-303081e8c4e1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891645 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891835 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.892002 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.892077 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config" (OuterVolumeSpecName: "config") pod "44d556c9-6c8e-45d3-bec8-303081e8c4e1" (UID: "44d556c9-6c8e-45d3-bec8-303081e8c4e1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.892426 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.894275 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.895117 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.897774 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "44d556c9-6c8e-45d3-bec8-303081e8c4e1" (UID: "44d556c9-6c8e-45d3-bec8-303081e8c4e1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.897948 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q" (OuterVolumeSpecName: "kube-api-access-d6t9q") pod "44d556c9-6c8e-45d3-bec8-303081e8c4e1" (UID: "44d556c9-6c8e-45d3-bec8-303081e8c4e1"). InnerVolumeSpecName "kube-api-access-d6t9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.898063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.918119 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.994344 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.994388 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.994402 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:37 crc kubenswrapper[4985]: I0128 18:16:37.079241 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:37 crc kubenswrapper[4985]: I0128 18:16:37.192573 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:16:37 crc kubenswrapper[4985]: I0128 18:16:37.198224 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:16:37 crc kubenswrapper[4985]: I0128 18:16:37.272766 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" path="/var/lib/kubelet/pods/44d556c9-6c8e-45d3-bec8-303081e8c4e1/volumes" Jan 28 18:16:37 crc kubenswrapper[4985]: E0128 18:16:37.619588 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 18:16:37 crc kubenswrapper[4985]: E0128 18:16:37.619801 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99vxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-58qq5_openshift-marketplace(ee77ca55-8cd0-4401-afec-9817fee5f6bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:37 crc kubenswrapper[4985]: E0128 18:16:37.620979 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-58qq5" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.537914 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-58qq5" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.658645 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.658891 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d86ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vq448_openshift-marketplace(bebbf794-5459-4a75-bff1-92b7551d4784): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.660019 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vq448" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.685221 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.685469 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89h9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mkflh_openshift-marketplace(d797afdd-19c6-45ed-81c8-5fa31175e121): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.686723 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mkflh" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.805773 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.805957 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glps2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ngcsk_openshift-marketplace(ff1a5336-5c99-49fa-bb89-311781866770): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.807525 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" Jan 28 18:16:39 crc kubenswrapper[4985]: I0128 18:16:39.419297 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:39 crc kubenswrapper[4985]: I0128 18:16:39.419665 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.185644 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.185726 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.185794 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.186717 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.186812 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa" gracePeriod=600 Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.882349 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa" exitCode=0 Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.882438 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa"} Jan 28 18:16:42 crc kubenswrapper[4985]: E0128 18:16:42.743376 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mkflh" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" Jan 28 18:16:42 crc kubenswrapper[4985]: E0128 18:16:42.743395 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" Jan 28 18:16:42 crc kubenswrapper[4985]: E0128 18:16:42.743622 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vq448" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" Jan 28 18:16:42 crc kubenswrapper[4985]: I0128 18:16:42.808185 4985 scope.go:117] "RemoveContainer" containerID="d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.050412 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.051403 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kj4fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tkbjb_openshift-marketplace(4bec6c8f-9678-463c-9e09-5b8e362f2f1b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.052601 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tkbjb" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.068231 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.068423 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpdsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2zfzc_openshift-marketplace(478dee72-717a-448e-b14d-15d600c82eb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.069725 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2zfzc" podUID="478dee72-717a-448e-b14d-15d600c82eb5" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.071528 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.071637 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gn4jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zcwgk_openshift-marketplace(f17410ee-fc07-4e6c-8262-d3dad9ca4a5d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.072954 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zcwgk" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.257386 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 18:16:43 crc kubenswrapper[4985]: W0128 18:16:43.273442 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod490ef8c2_c2f7_4661_9016_d6bbadb543ff.slice/crio-cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0 WatchSource:0}: Error finding container cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0: Status 404 returned error can't find the container with id cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0 Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.333338 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.343217 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.345089 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hrd6k"] Jan 28 18:16:43 crc kubenswrapper[4985]: W0128 18:16:43.374170 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9a55227_f583_4f77_845f_9938b41aad05.slice/crio-0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5 WatchSource:0}: Error finding container 
0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5: Status 404 returned error can't find the container with id 0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.433123 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"]
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.918612 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerStarted","Data":"ea88d0096240b8b1ce3a53612acc27a9069f84f2e4c034995d9d80ba5534c382"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.920470 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"490ef8c2-c2f7-4661-9016-d6bbadb543ff","Type":"ContainerStarted","Data":"a8bc81de07eb444f8f7f3c331821e8845288261d63d60d28d416c8c297b87410"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.920532 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"490ef8c2-c2f7-4661-9016-d6bbadb543ff","Type":"ContainerStarted","Data":"cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.921830 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" event={"ID":"c9a55227-f583-4f77-845f-9938b41aad05","Type":"ContainerStarted","Data":"230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.921859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" event={"ID":"c9a55227-f583-4f77-845f-9938b41aad05","Type":"ContainerStarted","Data":"0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.922133 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.930081 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpz9q" event={"ID":"25061ce4-ca31-4da7-ad36-c6535e1d2028","Type":"ContainerStarted","Data":"27a6a768d0f7cda3a9be6469f427962f23d0f54576c2de064e4cfba387aa0006"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.930300 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hpz9q"
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.930722 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.930789 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.932185 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" event={"ID":"c548c555-f5c2-4b49-83f4-ba501eb53a19","Type":"ContainerStarted","Data":"aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.932212 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" event={"ID":"c548c555-f5c2-4b49-83f4-ba501eb53a19","Type":"ContainerStarted","Data":"ad2adfb876654b6fefd1ea75de1738cfc3935a2a867a3438609617e943e0d7b9"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.933218 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d"
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.934434 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" event={"ID":"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0","Type":"ContainerStarted","Data":"3bcc15c49ad319492bfc3a7313c76d11980f9fb5262fe5586f8704dea7732913"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.934461 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" event={"ID":"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0","Type":"ContainerStarted","Data":"a75d2e51bc33c85d8fb48bc8f8ff0c7277c0877f520a52b18651a6d98a4378c5"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.938498 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.940385 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9","Type":"ContainerStarted","Data":"3c8ef3ffe3a3beb101ee44bb4477a152e2c2c1d60d8d32877bb5661a8b94361c"}
Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.940454 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9","Type":"ContainerStarted","Data":"f249e6a9045822ac8356aabfe2373c714fcb3fec9f0635e367520cd44059c81b"}
Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.941724 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zcwgk" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d"
Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.943369 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tkbjb" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b"
Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.943784 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2zfzc" podUID="478dee72-717a-448e-b14d-15d600c82eb5"
podUID="478dee72-717a-448e-b14d-15d600c82eb5" Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.946707 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.013214 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" podStartSLOduration=19.0131859 podStartE2EDuration="19.0131859s" podCreationTimestamp="2026-01-28 18:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:43.983048008 +0000 UTC m=+214.809610829" watchObservedRunningTime="2026-01-28 18:16:44.0131859 +0000 UTC m=+214.839748721" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.112512 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=16.112487 podStartE2EDuration="16.112487s" podCreationTimestamp="2026-01-28 18:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:44.078987372 +0000 UTC m=+214.905550193" watchObservedRunningTime="2026-01-28 18:16:44.112487 +0000 UTC m=+214.939049821" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.112759 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" podStartSLOduration=19.112755467 podStartE2EDuration="19.112755467s" podCreationTimestamp="2026-01-28 18:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:44.109774572 +0000 UTC m=+214.936337393" watchObservedRunningTime="2026-01-28 18:16:44.112755467 +0000 UTC m=+214.939318288" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.202452 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.202428792 podStartE2EDuration="11.202428792s" podCreationTimestamp="2026-01-28 18:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:44.170877849 +0000 UTC m=+214.997440690" watchObservedRunningTime="2026-01-28 18:16:44.202428792 +0000 UTC m=+215.028991623" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.241544 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.951779 4985 generic.go:334] "Generic (PLEG): container finished" podID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerID="ea88d0096240b8b1ce3a53612acc27a9069f84f2e4c034995d9d80ba5534c382" exitCode=0 Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.951860 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerDied","Data":"ea88d0096240b8b1ce3a53612acc27a9069f84f2e4c034995d9d80ba5534c382"} Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.957010 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="490ef8c2-c2f7-4661-9016-d6bbadb543ff" containerID="a8bc81de07eb444f8f7f3c331821e8845288261d63d60d28d416c8c297b87410" exitCode=0 Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.957188 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"490ef8c2-c2f7-4661-9016-d6bbadb543ff","Type":"ContainerDied","Data":"a8bc81de07eb444f8f7f3c331821e8845288261d63d60d28d416c8c297b87410"} Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.959352 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" event={"ID":"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0","Type":"ContainerStarted","Data":"c49fe4bca42d080f2e058ce4f25686140f849c2dbe753d51cc784e4e644223a4"} Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.960299 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.960391 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:45 crc kubenswrapper[4985]: I0128 18:16:45.970804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerStarted","Data":"30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45"} Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.305785 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.328490 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hrd6k" podStartSLOduration=186.328459675 podStartE2EDuration="3m6.328459675s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:45.01547413 +0000 UTC m=+215.842036971" watchObservedRunningTime="2026-01-28 18:16:46.328459675 +0000 UTC m=+217.155022496" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.470791 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") pod \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.470922 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") pod \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.471084 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "490ef8c2-c2f7-4661-9016-d6bbadb543ff" (UID: "490ef8c2-c2f7-4661-9016-d6bbadb543ff"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.471676 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.481447 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "490ef8c2-c2f7-4661-9016-d6bbadb543ff" (UID: "490ef8c2-c2f7-4661-9016-d6bbadb543ff"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.573767 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.977895 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"490ef8c2-c2f7-4661-9016-d6bbadb543ff","Type":"ContainerDied","Data":"cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0"} Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.978341 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.977957 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:47 crc kubenswrapper[4985]: I0128 18:16:47.006985 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nbllw" podStartSLOduration=4.813574879 podStartE2EDuration="1m0.006961865s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="2026-01-28 18:15:50.219350054 +0000 UTC m=+161.045912875" lastFinishedPulling="2026-01-28 18:16:45.41273704 +0000 UTC m=+216.239299861" observedRunningTime="2026-01-28 18:16:47.001857819 +0000 UTC m=+217.828420650" watchObservedRunningTime="2026-01-28 18:16:47.006961865 +0000 UTC m=+217.833524676" Jan 28 18:16:47 crc kubenswrapper[4985]: I0128 18:16:47.858473 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:16:47 crc kubenswrapper[4985]: I0128 18:16:47.858897 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:16:49 crc kubenswrapper[4985]: I0128 18:16:49.433496 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:16:49 crc kubenswrapper[4985]: I0128 18:16:49.449559 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nbllw" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server" probeResult="failure" output=< Jan 28 18:16:49 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:16:49 crc kubenswrapper[4985]: > Jan 28 18:16:58 crc kubenswrapper[4985]: I0128 18:16:58.096587 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:16:58 crc kubenswrapper[4985]: I0128 18:16:58.144056 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.069385 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerStarted","Data":"c6a6370de55c9f1d322d443a680768dd95b5a50ccc8cfbead3f597f6cb81b47b"} Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.071193 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff1a5336-5c99-49fa-bb89-311781866770" containerID="3b65c4cdfefa99481aa1051361932ec6ad9c250e75289c86b535f66431840968" exitCode=0 Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.071240 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerDied","Data":"3b65c4cdfefa99481aa1051361932ec6ad9c250e75289c86b535f66431840968"} Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.074820 4985 generic.go:334] "Generic (PLEG): container finished" podID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerID="08c2afc11e237eab84a8f7dfaa5b0598297222c01564bf4921e004a1b405af84" exitCode=0 Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.075078 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerDied","Data":"08c2afc11e237eab84a8f7dfaa5b0598297222c01564bf4921e004a1b405af84"} 
Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.081693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerStarted","Data":"82b69880adf61999e4575782c5ecaafe22c81d0a0e17bab967aa245eeb683a6c"} Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.083960 4985 generic.go:334] "Generic (PLEG): container finished" podID="bebbf794-5459-4a75-bff1-92b7551d4784" containerID="c3c7c834b59dec9afe12ae5cb4e24ce5d7fb7d283ff22d3d168e71ce368d578d" exitCode=0 Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.084010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerDied","Data":"c3c7c834b59dec9afe12ae5cb4e24ce5d7fb7d283ff22d3d168e71ce368d578d"} Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.086203 4985 generic.go:334] "Generic (PLEG): container finished" podID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerID="5ae5d10976e7c26eb6213f430d17c638f8547abe24f44e7063a7dba954835ef4" exitCode=0 Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.086264 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerDied","Data":"5ae5d10976e7c26eb6213f430d17c638f8547abe24f44e7063a7dba954835ef4"} Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.093860 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerID="f66d90e90c24d7eaca4eeddb8684aee625dffff1f85b1b4fa72af4b5c206bbee" exitCode=0 Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.094343 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerDied","Data":"f66d90e90c24d7eaca4eeddb8684aee625dffff1f85b1b4fa72af4b5c206bbee"} Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.097116 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerDied","Data":"c6a6370de55c9f1d322d443a680768dd95b5a50ccc8cfbead3f597f6cb81b47b"} Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.096747 4985 generic.go:334] "Generic (PLEG): container finished" podID="478dee72-717a-448e-b14d-15d600c82eb5" containerID="c6a6370de55c9f1d322d443a680768dd95b5a50ccc8cfbead3f597f6cb81b47b" exitCode=0 Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.103073 4985 generic.go:334] "Generic (PLEG): container finished" podID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerID="82b69880adf61999e4575782c5ecaafe22c81d0a0e17bab967aa245eeb683a6c" exitCode=0 Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.103106 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerDied","Data":"82b69880adf61999e4575782c5ecaafe22c81d0a0e17bab967aa245eeb683a6c"} Jan 28 18:17:05 crc kubenswrapper[4985]: I0128 18:17:05.514151 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:17:05 crc kubenswrapper[4985]: I0128 18:17:05.514797 4985 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerName="controller-manager" containerID="cri-o://aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce" gracePeriod=30 Jan 28 18:17:05 crc kubenswrapper[4985]: I0128 18:17:05.611686 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:17:05 crc kubenswrapper[4985]: I0128 18:17:05.611975 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" containerID="cri-o://230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea" gracePeriod=30 Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.237762 4985 generic.go:334] "Generic (PLEG): container finished" podID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerID="aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce" exitCode=0 Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.237898 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" event={"ID":"c548c555-f5c2-4b49-83f4-ba501eb53a19","Type":"ContainerDied","Data":"aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce"} Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.239874 4985 generic.go:334] "Generic (PLEG): container finished" podID="c9a55227-f583-4f77-845f-9938b41aad05" containerID="230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea" exitCode=0 Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.239958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" event={"ID":"c9a55227-f583-4f77-845f-9938b41aad05","Type":"ContainerDied","Data":"230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea"} Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.242458 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerStarted","Data":"d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82"} Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.274864 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ngcsk" podStartSLOduration=5.266846474 podStartE2EDuration="1m19.274841308s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="2026-01-28 18:15:50.308294941 +0000 UTC m=+161.134857762" lastFinishedPulling="2026-01-28 18:17:04.316289765 +0000 UTC m=+235.142852596" observedRunningTime="2026-01-28 18:17:06.272227254 +0000 UTC m=+237.098790085" watchObservedRunningTime="2026-01-28 18:17:06.274841308 +0000 UTC m=+237.101404139" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.080774 4985 patch_prober.go:28] interesting pod/route-controller-manager-76d5df6584-ppscc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.081181 4985 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.365338 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.399266 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:17:07 crc kubenswrapper[4985]: E0128 18:17:07.399856 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.399974 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" Jan 28 18:17:07 crc kubenswrapper[4985]: E0128 18:17:07.400066 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490ef8c2-c2f7-4661-9016-d6bbadb543ff" containerName="pruner" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.400146 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="490ef8c2-c2f7-4661-9016-d6bbadb543ff" containerName="pruner" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.400354 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.400445 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="490ef8c2-c2f7-4661-9016-d6bbadb543ff" containerName="pruner" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.401113 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.406929 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482233 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") pod \"c9a55227-f583-4f77-845f-9938b41aad05\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482378 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") pod \"c9a55227-f583-4f77-845f-9938b41aad05\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482513 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") pod \"c9a55227-f583-4f77-845f-9938b41aad05\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482577 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") pod \"c9a55227-f583-4f77-845f-9938b41aad05\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482754 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lxb\" (UniqueName: \"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482826 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482890 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.483045 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: 
I0128 18:17:07.483614 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca" (OuterVolumeSpecName: "client-ca") pod "c9a55227-f583-4f77-845f-9938b41aad05" (UID: "c9a55227-f583-4f77-845f-9938b41aad05"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.484106 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config" (OuterVolumeSpecName: "config") pod "c9a55227-f583-4f77-845f-9938b41aad05" (UID: "c9a55227-f583-4f77-845f-9938b41aad05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.492915 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c9a55227-f583-4f77-845f-9938b41aad05" (UID: "c9a55227-f583-4f77-845f-9938b41aad05"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.493500 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb" (OuterVolumeSpecName: "kube-api-access-gfgxb") pod "c9a55227-f583-4f77-845f-9938b41aad05" (UID: "c9a55227-f583-4f77-845f-9938b41aad05"). InnerVolumeSpecName "kube-api-access-gfgxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584604 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584708 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584744 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lxb\" (UniqueName: \"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584777 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584922 4985 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584937 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584949 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584989 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.586534 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.586957 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.590066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.604702 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lxb\" (UniqueName: \"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.721290 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.890922 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.988871 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989005 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989057 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989091 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989120 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989980 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca" (OuterVolumeSpecName: "client-ca") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.990092 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.990287 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config" (OuterVolumeSpecName: "config") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.994144 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442" (OuterVolumeSpecName: "kube-api-access-fw442") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "kube-api-access-fw442". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.994184 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.087151 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.087233 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.090760 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.091317 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.091336 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.091350 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.091366 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.150857 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.255235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" event={"ID":"c9a55227-f583-4f77-845f-9938b41aad05","Type":"ContainerDied","Data":"0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5"} Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.255323 4985 scope.go:117] "RemoveContainer" containerID="230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.255334 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.257368 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.257399 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" event={"ID":"c548c555-f5c2-4b49-83f4-ba501eb53a19","Type":"ContainerDied","Data":"ad2adfb876654b6fefd1ea75de1738cfc3935a2a867a3438609617e943e0d7b9"} Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.289679 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.294765 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.299577 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.302753 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.506361 4985 scope.go:117] "RemoveContainer" containerID="aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce" Jan 28 18:17:09 crc kubenswrapper[4985]: I0128 18:17:09.276097 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" path="/var/lib/kubelet/pods/c548c555-f5c2-4b49-83f4-ba501eb53a19/volumes" Jan 28 18:17:09 crc kubenswrapper[4985]: I0128 18:17:09.276685 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a55227-f583-4f77-845f-9938b41aad05" path="/var/lib/kubelet/pods/c9a55227-f583-4f77-845f-9938b41aad05/volumes" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.018539 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:17:10 crc kubenswrapper[4985]: E0128 18:17:10.018835 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerName="controller-manager" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.018851 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerName="controller-manager" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.018978 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerName="controller-manager" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.019636 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.026305 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.026676 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.026988 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.027140 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.027306 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.027460 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.030936 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.035787 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118769 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118840 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118891 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118944 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118992 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219726 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219802 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219826 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219845 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219875 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.290270 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.291137 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.295578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " 
pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.296591 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.321589 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.349110 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.740284 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:17:12 crc kubenswrapper[4985]: I0128 18:17:12.283559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerStarted","Data":"61b704f839468f67ac0c3f15e67acd552ecf612f482f58ba44a89c002ae8c45b"} Jan 28 18:17:18 crc kubenswrapper[4985]: I0128 18:17:18.146702 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:18 crc kubenswrapper[4985]: I0128 18:17:18.210602 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ngcsk"] Jan 28 18:17:18 crc kubenswrapper[4985]: I0128 18:17:18.333880 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server" containerID="cri-o://d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" gracePeriod=2 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.455805 4985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457092 4985 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457280 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457599 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457667 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457642 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457746 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457765 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.459980 4985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460362 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460399 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460420 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460435 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460464 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460476 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460494 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460506 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460519 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460531 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460549 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460561 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460578 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460591 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460799 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460817 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460835 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460853 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460873 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460888 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.461072 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.461086 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.461348 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.524168 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622108 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622221 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622285 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622326 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622470 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622505 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723823 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723958 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724067 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723582 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724283 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724528 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724630 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724733 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724287 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724523 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724685 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724575 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.725102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.817496 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:17:22 crc kubenswrapper[4985]: I0128 18:17:22.413399 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 28 18:17:22 crc kubenswrapper[4985]: I0128 18:17:22.413475 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 28 18:17:24 crc kubenswrapper[4985]: E0128 18:17:24.473483 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-58qq5.188ef7dfddb617e6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-58qq5,UID:ee77ca55-8cd0-4401-afec-9817fee5f6bb,APIVersion:v1,ResourceVersion:28142,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,LastTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 18:17:25 crc kubenswrapper[4985]: I0128 18:17:25.391100 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerStarted","Data":"01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58"}
Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.166691 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.167311 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.168155 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.168663 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.169039 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:26 crc kubenswrapper[4985]: I0128 18:17:26.169088 4985 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.169492 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="200ms"
Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.370319 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="400ms"
Jan 28 18:17:26 crc kubenswrapper[4985]: I0128 18:17:26.400787 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 18:17:26 crc kubenswrapper[4985]: I0128 18:17:26.402515 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 18:17:26 crc kubenswrapper[4985]: I0128 18:17:26.403442 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a" exitCode=2
Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.771592 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="800ms"
Jan 28 18:17:27 crc kubenswrapper[4985]: I0128 18:17:27.410663 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ngcsk_ff1a5336-5c99-49fa-bb89-311781866770/registry-server/0.log"
Jan 28 18:17:27 crc kubenswrapper[4985]: I0128 18:17:27.411630 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff1a5336-5c99-49fa-bb89-311781866770" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" exitCode=137
Jan 28 18:17:27 crc kubenswrapper[4985]: I0128 18:17:27.411680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerDied","Data":"d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82"}
Jan 28 18:17:27 crc kubenswrapper[4985]: E0128 18:17:27.572055 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="1.6s"
Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.087544 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.088207 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.088735 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.088772 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server"
Jan 28 18:17:28 crc kubenswrapper[4985]: I0128 18:17:28.420931 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 18:17:28 crc kubenswrapper[4985]: I0128 18:17:28.423232 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 18:17:28 crc kubenswrapper[4985]: I0128 18:17:28.424225 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0" exitCode=0
Jan 28 18:17:28 crc kubenswrapper[4985]: I0128 18:17:28.424292 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6" exitCode=0
Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.733162 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-58qq5.188ef7dfddb617e6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-58qq5,UID:ee77ca55-8cd0-4401-afec-9817fee5f6bb,APIVersion:v1,ResourceVersion:28142,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,LastTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 18:17:29 crc kubenswrapper[4985]: E0128 18:17:29.173703 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="3.2s"
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.437713 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.441883 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.443551 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861" exitCode=0
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.443610 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44" exitCode=0
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.443653 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4"
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.449699 4985 generic.go:334] "Generic (PLEG): container finished" podID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" containerID="3c8ef3ffe3a3beb101ee44bb4477a152e2c2c1d60d8d32877bb5661a8b94361c" exitCode=0
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.449930 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9","Type":"ContainerDied","Data":"3c8ef3ffe3a3beb101ee44bb4477a152e2c2c1d60d8d32877bb5661a8b94361c"}
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.451029 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.451751 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.452193 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.452667 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.453163 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:31 crc kubenswrapper[4985]: I0128 18:17:31.269359 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:31 crc kubenswrapper[4985]: I0128 18:17:31.270001 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:31 crc kubenswrapper[4985]: I0128 18:17:31.270697 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:32 crc kubenswrapper[4985]: E0128 18:17:32.343118 4985 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.195:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" volumeName="registry-storage" Jan 28 18:17:32 crc kubenswrapper[4985]: E0128 18:17:32.375969 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="6.4s" Jan 28 18:17:34 crc kubenswrapper[4985]: I0128 18:17:34.348007 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 18:17:34 crc kubenswrapper[4985]: I0128 18:17:34.348729 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.503450 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.503533 4985 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db" exitCode=1 Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.503590 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db"} Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.504327 4985 scope.go:117] "RemoveContainer" containerID="e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.505071 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.505769 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.507677 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.508223 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.572746 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.667538 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.667629 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.730987 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.737374 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 
18:17:37.738270 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.738681 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.739176 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.739649 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.088161 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.089160 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.090424 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.090536 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.588416 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.589368 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.590145 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.591280 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.592392 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.735283 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-58qq5.188ef7dfddb617e6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-58qq5,UID:ee77ca55-8cd0-4401-afec-9817fee5f6bb,APIVersion:v1,ResourceVersion:28142,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,LastTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:38.777853 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="7s" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.089092 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.090090 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.090663 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.091393 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.091718 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115512 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") pod \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") "
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115590 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") pod \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") "
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115680 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") pod \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") "
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115765 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock" (OuterVolumeSpecName: "var-lock") pod "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" (UID: "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115936 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" (UID: "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.116375 4985 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") on node \"crc\" DevicePath \"\""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.116399 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.125311 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" (UID: "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.217343 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.273075 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.273835 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.274736 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.275378 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.545069 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9","Type":"ContainerDied","Data":"f249e6a9045822ac8356aabfe2373c714fcb3fec9f0635e367520cd44059c81b"}
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.545136 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f249e6a9045822ac8356aabfe2373c714fcb3fec9f0635e367520cd44059c81b"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.545185 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.552404 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.553112 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.553944 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.554571 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.070939 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.071775 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.072207 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.072701 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.073422 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.073602 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.073854 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.074671 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ngcsk_ff1a5336-5c99-49fa-bb89-311781866770/registry-server/0.log"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.075559 4985 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-ngcsk"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.076269 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.076581 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.076998 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.077395 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.077630 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.077931 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.131835 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") pod \"ff1a5336-5c99-49fa-bb89-311781866770\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") "
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.131947 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.131992 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132027 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132169 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") pod \"ff1a5336-5c99-49fa-bb89-311781866770\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") "
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132166 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132150 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132205 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") pod \"ff1a5336-5c99-49fa-bb89-311781866770\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") "
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132982 4985 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.133020 4985 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132992 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities" (OuterVolumeSpecName: "utilities") pod "ff1a5336-5c99-49fa-bb89-311781866770" (UID: "ff1a5336-5c99-49fa-bb89-311781866770"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132123 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.137929 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2" (OuterVolumeSpecName: "kube-api-access-glps2") pod "ff1a5336-5c99-49fa-bb89-311781866770" (UID: "ff1a5336-5c99-49fa-bb89-311781866770"). InnerVolumeSpecName "kube-api-access-glps2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.234723 4985 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.234766 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.234786 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") on node \"crc\" DevicePath \"\""
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.556654 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.557570 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.559193 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ngcsk_ff1a5336-5c99-49fa-bb89-311781866770/registry-server/0.log"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.560134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerDied","Data":"443d55c2efdfe0f8e6f7fa0e88bf057b626e08f470a93af561b93e9387fb0988"}
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.560298 4985 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-ngcsk"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.561595 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.562196 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.562773 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.563388 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.566230 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.566977 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.575129 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.575508 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.575815 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.576136 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.576977 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.577387 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:43.290967 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:44.347683 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.539491 4985 scope.go:117] "RemoveContainer" containerID="7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861"
Jan 28 18:17:55 crc kubenswrapper[4985]: W0128 18:17:45.596073 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-da5c3365696303ccc352a5b0405df920571d579c6b1c1efd838229e335c6e2cc WatchSource:0}: Error finding container da5c3365696303ccc352a5b0405df920571d579c6b1c1efd838229e335c6e2cc: Status 404 returned error can't find the container with id da5c3365696303ccc352a5b0405df920571d579c6b1c1efd838229e335c6e2cc
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.600241 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.645057 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4"
Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:45.659609 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\": container with ID starting with 58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4 not found: ID does not exist" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.659990 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4"} err="failed to get container status \"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\": rpc error: code = NotFound desc = could not find container \"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\": container with ID starting with 58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4 not found: ID does not exist"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.660029 4985 scope.go:117] "RemoveContainer" containerID="094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.734135 4985 scope.go:117] "RemoveContainer" containerID="270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.765499 4985 scope.go:117] "RemoveContainer" containerID="001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a"
Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:45.778754 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="7s"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.795010 4985 scope.go:117] "RemoveContainer" containerID="88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.816598 4985 scope.go:117] "RemoveContainer" containerID="ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.858343 4985 scope.go:117] "RemoveContainer" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.881874 4985 scope.go:117] "RemoveContainer" containerID="3b65c4cdfefa99481aa1051361932ec6ad9c250e75289c86b535f66431840968"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.897697 4985 scope.go:117] "RemoveContainer" containerID="081b66f566faa6677cfda3978e83d93b4dce7e5760fe6c65c107d2c177beeb71"
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.221137 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff1a5336-5c99-49fa-bb89-311781866770" (UID: "ff1a5336-5c99-49fa-bb89-311781866770"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.303604 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.491902 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.492880 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.493465 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.493951 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.494494 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.619041 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"da5c3365696303ccc352a5b0405df920571d579c6b1c1efd838229e335c6e2cc"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.263342 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.264950 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.265708 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.266149 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.266562 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.266950 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.286999 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.287037 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:47.287685 4985 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.288423 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: W0128 18:17:47.320349 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-7ef78ab0fad28bb68e3f5443a429f16a3dd5218795b594c148cacaa1a2477f25 WatchSource:0}: Error finding container 7ef78ab0fad28bb68e3f5443a429f16a3dd5218795b594c148cacaa1a2477f25: Status 404 returned error can't find the container with id 7ef78ab0fad28bb68e3f5443a429f16a3dd5218795b594c148cacaa1a2477f25 Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.630990 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerStarted","Data":"eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.633171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7ef78ab0fad28bb68e3f5443a429f16a3dd5218795b594c148cacaa1a2477f25"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.645524 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.645632 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0025f144f3fa7cc81c86c1fe0e47ad15fbc5caa56b23b223f51fe0e0fd77569e"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.648022 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerStarted","Data":"31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.652006 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerStarted","Data":"3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.657388 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerStarted","Data":"98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.658913 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerStarted","Data":"84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.661859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerStarted","Data":"9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.663446 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ab7d18f55611d02a03d62a6ebace75ed35b7b1a319a4367884bd6c2504dce01f"} Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:48.736133 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-58qq5.188ef7dfddb617e6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-58qq5,UID:ee77ca55-8cd0-4401-afec-9817fee5f6bb,APIVersion:v1,ResourceVersion:28142,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,LastTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.678856 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.679658 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.679867 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.679992 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.680067 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.680332 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc 
kubenswrapper[4985]: I0128 18:17:49.680574 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.680810 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.680953 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681110 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681317 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681541 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681802 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681970 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.682112 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.682281 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.682453 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.682602 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.870657 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:17:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:17:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:17:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:17:49Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:2c1439ebdda893daf377def2d4397762658d82b531bb83f7ae41a4e7f26d4407\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c044fa5dc076cb0fb053c5a676c39093e5fd06f6cc0eeaff8a747680c99c8b7f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1675724519},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:40a0af9b58137c413272f3533763f7affd5db97e6ef410a6aeabce6d81a246ee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7e9b6f6bdbfa69f6106bc85eaee51d908ede4be851b578362af443af6bf732a8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202031349},{\\\"names
\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:364f5956de22b63db7dad4fcdd1f2740f71a482026c15aa3e2abebfbc5bf2fd7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d3d262f90dd0f3c3f809b45f327ca086741a47f73e44560b04787609f0f99567\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1187310829},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.871406 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.872053 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.872639 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.873282 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.873320 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.945509 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.945705 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.002938 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.003722 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.004371 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.004859 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.005359 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.005721 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.006197 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.006623 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.006870 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.007054 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.007240 4985 status_manager.go:851] "Failed to get status for pod" 
podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.007483 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.254218 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.254311 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.294588 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.295519 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.296193 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.296844 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.297196 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.297673 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.298116 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.298523 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.298891 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.299304 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.299662 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.300071 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.680023 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.680099 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.888332 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.888403 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 
18:17:51.270695 4985 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.271255 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.271628 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.272011 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.272879 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.273738 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274172 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274408 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274603 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" 
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274641 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274676 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274869 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.275089 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.275366 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.685739 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.685814 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.694176 4985 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7fc250dcdccc741c807afcb3a8ac8715854616989d2d2a8934a498aee980197f" exitCode=0 Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.694378 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7fc250dcdccc741c807afcb3a8ac8715854616989d2d2a8934a498aee980197f"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.951464 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zcwgk" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server" probeResult="failure" output=< Jan 28 18:17:55 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:17:55 crc kubenswrapper[4985]: > Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:52.310660 4985 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2zfzc" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server" probeResult="failure" output=< Jan 28 18:17:55 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:17:55 crc kubenswrapper[4985]: > Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:52.780181 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="7s" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.715247 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.715830 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.715985 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:54.716412 4985 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.716784 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.717336 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.717786 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.718343 4985 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.718876 4985 status_manager.go:851] "Failed to get 
status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.719412 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.719859 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.720372 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.720869 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.721386 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.721828 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:56 crc kubenswrapper[4985]: E0128 18:17:56.014386 4985 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 28 18:17:56 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e" Netns:"/var/run/netns/4eb08ed8-3d76-4238-9b6a-71757c20ed1a" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:56 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:56 crc kubenswrapper[4985]: > Jan 28 18:17:56 crc kubenswrapper[4985]: E0128 18:17:56.014580 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 28 18:17:56 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e" Netns:"/var/run/netns/4eb08ed8-3d76-4238-9b6a-71757c20ed1a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:56 crc kubenswrapper[4985]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:56 crc kubenswrapper[4985]: > pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:56 crc kubenswrapper[4985]: E0128 18:17:56.014609 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 28 18:17:56 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e" Netns:"/var/run/netns/4eb08ed8-3d76-4238-9b6a-71757c20ed1a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:56 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:56 crc kubenswrapper[4985]: > pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:56 crc kubenswrapper[4985]: E0128 18:17:56.014667 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager(eefb5804-82d5-488f-a5c4-5473107ffbcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager(eefb5804-82d5-488f-a5c4-5473107ffbcd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e): error adding pod 
openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e\\\" Netns:\\\"/var/run/netns/4eb08ed8-3d76-4238-9b6a-71757c20ed1a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s\\\": dial tcp 38.102.83.195:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" Jan 28 18:17:56 crc kubenswrapper[4985]: I0128 18:17:56.727643 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:56 crc kubenswrapper[4985]: I0128 18:17:56.728707 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:57 crc kubenswrapper[4985]: E0128 18:17:57.403404 4985 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 28 18:17:57 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5" Netns:"/var/run/netns/39cb782c-5ce0-470e-9072-793910fd8755" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:57 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:57 crc kubenswrapper[4985]: > Jan 28 18:17:57 crc kubenswrapper[4985]: E0128 18:17:57.403821 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 28 18:17:57 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5" Netns:"/var/run/netns/39cb782c-5ce0-470e-9072-793910fd8755" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: 
[openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:57 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:57 crc kubenswrapper[4985]: > pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:57 crc kubenswrapper[4985]: E0128 18:17:57.403852 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 28 18:17:57 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5" Netns:"/var/run/netns/39cb782c-5ce0-470e-9072-793910fd8755" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:57 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:57 crc kubenswrapper[4985]: > pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:57 crc kubenswrapper[4985]: E0128 18:17:57.403930 4985 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager(eefb5804-82d5-488f-a5c4-5473107ffbcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager(eefb5804-82d5-488f-a5c4-5473107ffbcd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5\\\" Netns:\\\"/var/run/netns/39cb782c-5ce0-470e-9072-793910fd8755\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s\\\": dial tcp 38.102.83.195:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.572155 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.731021 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.738503 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.739251 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.739752 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.740146 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.740717 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.741386 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.741830 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.742369 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.742822 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.743200 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.743650 4985 status_manager.go:851] "Failed to get status for 
pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.744090 4985 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.744483 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.353153 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.354205 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.409555 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.722977 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.723046 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.745290 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eab48cfc75705407bcf2bbf163efe5df0cb78ef2f172e3537db0797494e3a428"} Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.754517 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.794433 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:17:59 crc kubenswrapper[4985]: I0128 18:17:59.994309 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:18:00 crc kubenswrapper[4985]: I0128 18:18:00.307181 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:18:00 crc kubenswrapper[4985]: I0128 18:18:00.959143 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:18:01 crc kubenswrapper[4985]: I0128 18:18:01.026554 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:18:01 crc kubenswrapper[4985]: I0128 18:18:01.336233 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:18:01 crc kubenswrapper[4985]: I0128 18:18:01.381582 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:18:01 crc kubenswrapper[4985]: I0128 18:18:01.778194 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df43313715c9b9250dd6b76cc9f81680195396e592f7b9beb1e364154316870d"} Jan 28 18:18:03 crc kubenswrapper[4985]: I0128 18:18:03.789482 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"668a5a1fca394af9b85431e312e789be889070149007fbf6585536a96d26d7e3"} Jan 28 18:18:04 crc kubenswrapper[4985]: I0128 18:18:04.800104 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"867aef87f8404ba4d3244cbda663689a7da1991c53c5c338f80f4de59d8dd642"} Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.815365 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"98788e330c099ce5091b6f6069a917953b2497db56421633098d963cf693ce46"} Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.815902 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.815930 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.816404 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.829622 4985 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.839415 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab48cfc75705407bcf2bbf163efe5df0cb78ef2f172e3537db0797494e3a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://668a5a1fca394af9b85431e312e789be889070149007fbf6585536a96d26d7e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:18:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df43313715c9b9250dd6b76cc9f81680195396e592f7b9beb1e364154316870d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98788e330c099ce5091b6f6069a917953b2497db56421633098d963cf693ce46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:18:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867aef87f8404ba4d3244cbda663689a7da1991c53c
5c338f80f4de59d8dd642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:18:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found" Jan 28 18:18:06 crc kubenswrapper[4985]: I0128 18:18:06.834711 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:06 crc kubenswrapper[4985]: I0128 18:18:06.834766 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.289444 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.289729 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.295547 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.299347 4985 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8947ea9f-4373-478d-b3c5-ea73f8a66c61" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.841408 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.841974 4985 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531" exitCode=1 Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.842067 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531"} Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.842568 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.842599 4985 scope.go:117] "RemoveContainer" containerID="8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.842613 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.721980 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.722678 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.853712 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.857357 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.857651 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.857581 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"31af50e34fa620a5f81294ac0c220bee2c83cbdfd6c8e6b71423c865edabfac5"} Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.865799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:09 crc kubenswrapper[4985]: I0128 18:18:09.864144 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:09 crc kubenswrapper[4985]: I0128 18:18:09.864199 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.263724 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.264678 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.701653 4985 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.874019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" event={"ID":"eefb5804-82d5-488f-a5c4-5473107ffbcd","Type":"ContainerStarted","Data":"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1"} Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.874506 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" event={"ID":"eefb5804-82d5-488f-a5c4-5473107ffbcd","Type":"ContainerStarted","Data":"5b05bb1b67bf56c71462a79b529ac2543e0047903c359f6e9fac94a35e5f7aac"} Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.874855 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.877005 4985 patch_prober.go:28] interesting pod/controller-manager-7f8cf88bf9-bvxk6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.877095 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 28 18:18:11 crc kubenswrapper[4985]: I0128 18:18:11.328417 4985 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8947ea9f-4373-478d-b3c5-ea73f8a66c61" Jan 28 18:18:11 crc kubenswrapper[4985]: I0128 18:18:11.886986 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.803569 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:35284->10.217.0.58:8443: read: connection reset by peer" start-of-body= Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.804536 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:35284->10.217.0.58:8443: read: connection reset by peer" Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.803659 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:35282->10.217.0.58:8443: read: connection reset by peer" start-of-body= Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.804970 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:35282->10.217.0.58:8443: read: connection reset by peer" Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.927581 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5746676d8-2r8p5_e5f99d20-5afa-4144-b66e-9198c1d6c66d/route-controller-manager/0.log" Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.927644 4985 generic.go:334] "Generic (PLEG): container finished" podID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerID="84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f" exitCode=255 Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.927687 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerDied","Data":"84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f"} Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.928313 4985 scope.go:117] "RemoveContainer" containerID="84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f" Jan 28 18:18:18 crc kubenswrapper[4985]: I0128 18:18:18.939171 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5746676d8-2r8p5_e5f99d20-5afa-4144-b66e-9198c1d6c66d/route-controller-manager/0.log" Jan 28 18:18:18 crc kubenswrapper[4985]: I0128 18:18:18.939686 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerStarted","Data":"c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca"} Jan 28 18:18:18 crc kubenswrapper[4985]: I0128 18:18:18.940405 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:18:19 crc kubenswrapper[4985]: I0128 18:18:19.940320 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:18:19 crc kubenswrapper[4985]: I0128 18:18:19.940413 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:18:20 crc kubenswrapper[4985]: I0128 18:18:20.947156 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:18:20 crc kubenswrapper[4985]: I0128 18:18:20.947233 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:18:28 crc kubenswrapper[4985]: I0128 18:18:28.722377 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:18:28 crc kubenswrapper[4985]: I0128 18:18:28.723525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:18:30 crc kubenswrapper[4985]: I0128 18:18:30.488547 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 18:18:30 crc kubenswrapper[4985]: I0128 18:18:30.727871 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:18:30 crc kubenswrapper[4985]: I0128 18:18:30.902834 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 18:18:31 crc kubenswrapper[4985]: I0128 18:18:31.235038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 18:18:31 crc kubenswrapper[4985]: I0128 18:18:31.293864 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 18:18:31 crc kubenswrapper[4985]: I0128 18:18:31.846963 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.146371 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.422265 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.651111 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.741840 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.855702 4985 reflector.go:368] 
Jan 28 18:18:30 crc kubenswrapper[4985]: I0128 18:18:30.488547 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 28 18:18:30 crc kubenswrapper[4985]: I0128 18:18:30.727871 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 18:18:30 crc kubenswrapper[4985]: I0128 18:18:30.902834 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 28 18:18:31 crc kubenswrapper[4985]: I0128 18:18:31.235038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 18:18:31 crc kubenswrapper[4985]: I0128 18:18:31.293864 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 28 18:18:31 crc kubenswrapper[4985]: I0128 18:18:31.846963 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.146371 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.422265 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.651111 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.741840 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.855702 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.960311 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.969626 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.207106 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.405857 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.674323 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.841894 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.884558 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.942769 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.957664 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.980158 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.033070 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.054236 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.065391 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.225177 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.275038 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.331936 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.786321 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.876614 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.995348 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.063169 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.234886 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.431843 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.736791 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.755971 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.775636 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.796193 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.860684 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.073216 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.216901 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.273339 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.481908 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.511699 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.515723 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.698479 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.856819 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.899484 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.062040 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.174047 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.445041 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.527720 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.727613 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"
Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.117430 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.180679 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.286975 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.542081 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.612817 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.729433 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.783535 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.060583 4985 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.263708 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.646376 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.684824 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.792674 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.334205 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.351439 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.410876 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.454487 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.611398 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.708535 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.864496 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.058577 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.068860 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.117356 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.170941 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.188613 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.329496 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.384367 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.577110 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.579439 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.962355 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.341685 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.507380 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.761526 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.823341 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.834399 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.889034 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.920234 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.102584 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.145674 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.250564 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.257800 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.401726 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.517941 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.699239 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.730232 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.766439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.906628 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.337968 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.362800 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.376099 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.400096 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.488064 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.556719 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.563577 4985 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.056672 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.059139 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.119679 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.334038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.429422 4985 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.059139 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.119679 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.334038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.429422 4985 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.431287 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zcwgk" podStartSLOduration=66.827691584 podStartE2EDuration="2m55.431243162s" podCreationTimestamp="2026-01-28 18:15:50 +0000 UTC" firstStartedPulling="2026-01-28 18:15:52.412135849 +0000 UTC m=+163.238698670" lastFinishedPulling="2026-01-28 18:17:41.015687417 +0000 UTC m=+271.842250248" observedRunningTime="2026-01-28 18:18:04.794441196 +0000 UTC m=+295.621004017" watchObservedRunningTime="2026-01-28 18:18:45.431243162 +0000 UTC m=+336.257805983" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.431427 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tkbjb" podStartSLOduration=64.503243063 podStartE2EDuration="2m58.431422307s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="2026-01-28 18:15:50.228891505 +0000 UTC m=+161.055454326" lastFinishedPulling="2026-01-28 18:17:44.157070719 +0000 UTC m=+274.983633570" observedRunningTime="2026-01-28 18:18:04.705890763 +0000 UTC m=+295.532453584" watchObservedRunningTime="2026-01-28 18:18:45.431422307 +0000 UTC m=+336.257985128" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.431893 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58qq5" podStartSLOduration=99.114942799 podStartE2EDuration="2m58.43188673s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="2026-01-28 18:15:49.189814787 +0000 UTC m=+160.016377608" lastFinishedPulling="2026-01-28 18:17:08.506758708 +0000 UTC m=+239.333321539" observedRunningTime="2026-01-28 18:18:04.670851672 +0000 UTC m=+295.497414493" watchObservedRunningTime="2026-01-28 18:18:45.43188673 +0000 UTC m=+336.258449551" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.433011 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2zfzc" podStartSLOduration=62.243034209 podStartE2EDuration="2m55.433004883s" podCreationTimestamp="2026-01-28 18:15:50 +0000 UTC" firstStartedPulling="2026-01-28 18:15:52.341707937 +0000 UTC m=+163.168270758" lastFinishedPulling="2026-01-28 18:17:45.531678581 +0000 UTC m=+276.358241432" observedRunningTime="2026-01-28 18:18:04.747572725 +0000 UTC m=+295.574135546" watchObservedRunningTime="2026-01-28 18:18:45.433004883 +0000 UTC m=+336.259567694" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.434207 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" 
podStartSLOduration=100.434198327 podStartE2EDuration="1m40.434198327s" podCreationTimestamp="2026-01-28 18:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:10.897382314 +0000 UTC m=+301.723945145" watchObservedRunningTime="2026-01-28 18:18:45.434198327 +0000 UTC m=+336.260761148" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.434902 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=84.434895427 podStartE2EDuration="1m24.434895427s" podCreationTimestamp="2026-01-28 18:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:04.760163458 +0000 UTC m=+295.586726289" watchObservedRunningTime="2026-01-28 18:18:45.434895427 +0000 UTC m=+336.261458248" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.435181 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkflh" podStartSLOduration=97.429433825 podStartE2EDuration="2m56.435173375s" podCreationTimestamp="2026-01-28 18:15:49 +0000 UTC" firstStartedPulling="2026-01-28 18:15:51.322142723 +0000 UTC m=+162.148705544" lastFinishedPulling="2026-01-28 18:17:10.327882263 +0000 UTC m=+241.154445094" observedRunningTime="2026-01-28 18:18:04.689774458 +0000 UTC m=+295.516337279" watchObservedRunningTime="2026-01-28 18:18:45.435173375 +0000 UTC m=+336.261736196" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.435297 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vq448" podStartSLOduration=62.793682934 podStartE2EDuration="2m56.435291869s" podCreationTimestamp="2026-01-28 18:15:49 +0000 UTC" firstStartedPulling="2026-01-28 18:15:51.335014809 +0000 UTC m=+162.161577630" lastFinishedPulling="2026-01-28 18:17:44.976623714 +0000 UTC m=+275.803186565" observedRunningTime="2026-01-28 18:18:04.825068979 +0000 UTC m=+295.651631810" watchObservedRunningTime="2026-01-28 18:18:45.435291869 +0000 UTC m=+336.261854710" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.435540 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podStartSLOduration=100.435535306 podStartE2EDuration="1m40.435535306s" podCreationTimestamp="2026-01-28 18:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:04.810352875 +0000 UTC m=+295.636915696" watchObservedRunningTime="2026-01-28 18:18:45.435535306 +0000 UTC m=+336.262098137" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436233 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ngcsk","openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436310 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436338 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436354 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
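The three durations in each entry above are related in a fixed way: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Pods that never pulled an image (firstStartedPulling of 0001-01-01) therefore report SLO equal to E2E, as the controller-manager and route-controller-manager entries show. A small sketch checking that arithmetic against the redhat-operators-zcwgk line (values copied from the log; the result agrees with the logged 66.8277s to within a few nanoseconds of rounding):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            // Layout matches Go's default time.Time formatting used in the log.
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-28 18:15:50 +0000 UTC")
        running := parse("2026-01-28 18:18:45.431243162 +0000 UTC")
        pullStart := parse("2026-01-28 18:15:52.412135849 +0000 UTC")
        pullEnd := parse("2026-01-28 18:17:41.015687417 +0000 UTC")

        e2e := running.Sub(created)         // 2m55.431243162s = podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // ~1m6.8277s = podStartSLOduration
        fmt.Println(e2e, slo)
    }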
pods=["openshift-marketplace/redhat-operators-zcwgk","openshift-marketplace/community-operators-tkbjb","openshift-marketplace/certified-operators-58qq5","openshift-marketplace/community-operators-nbllw","openshift-marketplace/redhat-operators-2zfzc","openshift-marketplace/marketplace-operator-79b997595-b5wzm","openshift-marketplace/redhat-marketplace-mkflh","openshift-marketplace/redhat-marketplace-vq448"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436652 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vq448" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="registry-server" containerID="cri-o://31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436956 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58qq5" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="registry-server" containerID="cri-o://01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.437437 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zcwgk" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server" containerID="cri-o://eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.437739 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" containerID="cri-o://f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.438461 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2zfzc" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server" containerID="cri-o://98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.438738 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkflh" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="registry-server" containerID="cri-o://9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.438831 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nbllw" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server" containerID="cri-o://30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.438439 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tkbjb" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="registry-server" containerID="cri-o://3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.463667 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.524989 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=40.524960982 podStartE2EDuration="40.524960982s" podCreationTimestamp="2026-01-28 18:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:45.521434331 +0000 UTC m=+336.347997142" watchObservedRunningTime="2026-01-28 18:18:45.524960982 +0000 UTC m=+336.351523823" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.580674 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.675382 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.675700 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" containerID="cri-o://c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.780430 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.823470 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.119729 4985 generic.go:334] "Generic (PLEG): container finished" podID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerID="30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.119942 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerDied","Data":"30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.122832 4985 generic.go:334] "Generic (PLEG): container finished" podID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerID="01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.122891 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerDied","Data":"01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.126400 4985 generic.go:334] "Generic (PLEG): container finished" podID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerID="9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.126453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" 
event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerDied","Data":"9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.129031 4985 generic.go:334] "Generic (PLEG): container finished" podID="478dee72-717a-448e-b14d-15d600c82eb5" containerID="98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.129094 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerDied","Data":"98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.131748 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5746676d8-2r8p5_e5f99d20-5afa-4144-b66e-9198c1d6c66d/route-controller-manager/0.log" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.131797 4985 generic.go:334] "Generic (PLEG): container finished" podID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerID="c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.131853 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerDied","Data":"c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.131892 4985 scope.go:117] "RemoveContainer" containerID="84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.135315 4985 generic.go:334] "Generic (PLEG): container finished" podID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerID="eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.135408 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerDied","Data":"eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.137738 4985 generic.go:334] "Generic (PLEG): container finished" podID="bebbf794-5459-4a75-bff1-92b7551d4784" containerID="31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.137801 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerDied","Data":"31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.139089 4985 generic.go:334] "Generic (PLEG): container finished" podID="7b3b0534-3356-446a-91e8-dae980c402db" containerID="f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.139158 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" event={"ID":"7b3b0534-3356-446a-91e8-dae980c402db","Type":"ContainerDied","Data":"f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6"} Jan 28 18:18:46 crc 
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.140983 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerID="3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68" exitCode=0
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.141234 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerDied","Data":"3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68"}
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.141364 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager" containerID="cri-o://a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" gracePeriod=30
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.318497 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.368207 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.390451 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbllw"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.521857 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") pod \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.521954 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") pod \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.521985 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") pod \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522003 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") pod \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522053 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lxb\" (UniqueName: \"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") pod \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") pod \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") pod \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522843 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vq448"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.523214 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities" (OuterVolumeSpecName: "utilities") pod "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" (UID: "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.523272 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5f99d20-5afa-4144-b66e-9198c1d6c66d" (UID: "e5f99d20-5afa-4144-b66e-9198c1d6c66d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.523343 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config" (OuterVolumeSpecName: "config") pod "e5f99d20-5afa-4144-b66e-9198c1d6c66d" (UID: "e5f99d20-5afa-4144-b66e-9198c1d6c66d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.530967 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx" (OuterVolumeSpecName: "kube-api-access-rzrfx") pod "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" (UID: "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580"). InnerVolumeSpecName "kube-api-access-rzrfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.531030 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb" (OuterVolumeSpecName: "kube-api-access-q9lxb") pod "e5f99d20-5afa-4144-b66e-9198c1d6c66d" (UID: "e5f99d20-5afa-4144-b66e-9198c1d6c66d"). InnerVolumeSpecName "kube-api-access-q9lxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.533786 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.534175 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5f99d20-5afa-4144-b66e-9198c1d6c66d" (UID: "e5f99d20-5afa-4144-b66e-9198c1d6c66d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.548153 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkflh"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.597274 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" (UID: "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623008 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") pod \"bebbf794-5459-4a75-bff1-92b7551d4784\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623106 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") pod \"bebbf794-5459-4a75-bff1-92b7551d4784\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623207 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") pod \"bebbf794-5459-4a75-bff1-92b7551d4784\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623542 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623560 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623573 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623585 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623596 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623608 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623620 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lxb\" (UniqueName: \"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.625007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities" (OuterVolumeSpecName: "utilities") pod "bebbf794-5459-4a75-bff1-92b7551d4784" (UID: "bebbf794-5459-4a75-bff1-92b7551d4784"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.627695 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls" (OuterVolumeSpecName: "kube-api-access-d86ls") pod "bebbf794-5459-4a75-bff1-92b7551d4784" (UID: "bebbf794-5459-4a75-bff1-92b7551d4784"). InnerVolumeSpecName "kube-api-access-d86ls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.659769 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bebbf794-5459-4a75-bff1-92b7551d4784" (UID: "bebbf794-5459-4a75-bff1-92b7551d4784"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.663637 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkbjb"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.668903 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58qq5"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.672055 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.713902 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfzc"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.715163 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.716328 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcwgk"
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725162 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") pod \"d797afdd-19c6-45ed-81c8-5fa31175e121\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725213 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") pod \"d797afdd-19c6-45ed-81c8-5fa31175e121\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725428 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") pod \"d797afdd-19c6-45ed-81c8-5fa31175e121\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") "
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725802 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725827 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725838 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.728793 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities" (OuterVolumeSpecName: "utilities") pod "d797afdd-19c6-45ed-81c8-5fa31175e121" (UID: "d797afdd-19c6-45ed-81c8-5fa31175e121"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.732736 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m" (OuterVolumeSpecName: "kube-api-access-89h9m") pod "d797afdd-19c6-45ed-81c8-5fa31175e121" (UID: "d797afdd-19c6-45ed-81c8-5fa31175e121"). InnerVolumeSpecName "kube-api-access-89h9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.755351 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d797afdd-19c6-45ed-81c8-5fa31175e121" (UID: "d797afdd-19c6-45ed-81c8-5fa31175e121"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.791434 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") pod \"7b3b0534-3356-446a-91e8-dae980c402db\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826731 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpdsv\" (UniqueName: \"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") pod \"478dee72-717a-448e-b14d-15d600c82eb5\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826764 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") pod \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826802 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") pod \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826820 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") pod \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826846 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826868 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") pod \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") pod \"478dee72-717a-448e-b14d-15d600c82eb5\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826907 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") pod \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.827752 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities" (OuterVolumeSpecName: "utilities") pod "4bec6c8f-9678-463c-9e09-5b8e362f2f1b" (UID: "4bec6c8f-9678-463c-9e09-5b8e362f2f1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.827941 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities" (OuterVolumeSpecName: "utilities") pod "478dee72-717a-448e-b14d-15d600c82eb5" (UID: "478dee72-717a-448e-b14d-15d600c82eb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.828051 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.829787 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj" (OuterVolumeSpecName: "kube-api-access-99vxj") pod "ee77ca55-8cd0-4401-afec-9817fee5f6bb" (UID: "ee77ca55-8cd0-4401-afec-9817fee5f6bb"). InnerVolumeSpecName "kube-api-access-99vxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.830657 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv" (OuterVolumeSpecName: "kube-api-access-wpdsv") pod "478dee72-717a-448e-b14d-15d600c82eb5" (UID: "478dee72-717a-448e-b14d-15d600c82eb5"). InnerVolumeSpecName "kube-api-access-wpdsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.831304 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g" (OuterVolumeSpecName: "kube-api-access-2b84g") pod "7b3b0534-3356-446a-91e8-dae980c402db" (UID: "7b3b0534-3356-446a-91e8-dae980c402db"). InnerVolumeSpecName "kube-api-access-2b84g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832349 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") pod \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832391 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832446 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") pod \"478dee72-717a-448e-b14d-15d600c82eb5\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832482 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832556 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") pod \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.833542 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities" (OuterVolumeSpecName: "utilities") pod "ee77ca55-8cd0-4401-afec-9817fee5f6bb" (UID: "ee77ca55-8cd0-4401-afec-9817fee5f6bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.833916 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") pod \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.834383 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config" (OuterVolumeSpecName: "config") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835231 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") pod \"7b3b0534-3356-446a-91e8-dae980c402db\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835770 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") pod \"7b3b0534-3356-446a-91e8-dae980c402db\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835797 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") pod \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836523 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7b3b0534-3356-446a-91e8-dae980c402db" (UID: "7b3b0534-3356-446a-91e8-dae980c402db"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836533 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836568 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836581 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836593 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836604 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpdsv\" (UniqueName: \"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836615 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836623 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836633 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836644 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836653 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836664 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.837163 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx" (OuterVolumeSpecName: "kube-api-access-kj4fx") pod "4bec6c8f-9678-463c-9e09-5b8e362f2f1b" (UID: "4bec6c8f-9678-463c-9e09-5b8e362f2f1b"). InnerVolumeSpecName "kube-api-access-kj4fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.837321 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca" (OuterVolumeSpecName: "client-ca") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.838146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4" (OuterVolumeSpecName: "kube-api-access-hkcw4") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "kube-api-access-hkcw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.839685 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7b3b0534-3356-446a-91e8-dae980c402db" (UID: "7b3b0534-3356-446a-91e8-dae980c402db"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.839992 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.840667 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc" (OuterVolumeSpecName: "kube-api-access-gn4jc") pod "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" (UID: "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d"). InnerVolumeSpecName "kube-api-access-gn4jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.843885 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities" (OuterVolumeSpecName: "utilities") pod "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" (UID: "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.886785 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bec6c8f-9678-463c-9e09-5b8e362f2f1b" (UID: "4bec6c8f-9678-463c-9e09-5b8e362f2f1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.889720 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee77ca55-8cd0-4401-afec-9817fee5f6bb" (UID: "ee77ca55-8cd0-4401-afec-9817fee5f6bb"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.937970 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938007 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938023 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938036 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938045 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938054 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938063 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938075 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938084 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938096 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.958168 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.974232 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" (UID: "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.979748 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "478dee72-717a-448e-b14d-15d600c82eb5" (UID: "478dee72-717a-448e-b14d-15d600c82eb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.027058 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.039024 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.039321 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.057752 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.150591 4985 generic.go:334] "Generic (PLEG): container finished" podID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerID="a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" exitCode=0 Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.150667 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.151479 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" event={"ID":"eefb5804-82d5-488f-a5c4-5473107ffbcd","Type":"ContainerDied","Data":"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.151665 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" event={"ID":"eefb5804-82d5-488f-a5c4-5473107ffbcd","Type":"ContainerDied","Data":"5b05bb1b67bf56c71462a79b529ac2543e0047903c359f6e9fac94a35e5f7aac"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.151766 4985 scope.go:117] "RemoveContainer" containerID="a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.154117 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.154568 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" event={"ID":"7b3b0534-3356-446a-91e8-dae980c402db","Type":"ContainerDied","Data":"1e7f0e57b01f1d7574c6a758c09ab0d8248fafcd79d2a77c1cd5931c1c715640"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.172467 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerDied","Data":"fee5ad9c634324fb795c0ec18b20b982cec13ce8646e5a41d3259fd33ab8724c"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.172573 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.181607 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerDied","Data":"29cf66044b42b3771161b4b736214738baedd3db9a4eab25aec806dff09290a6"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.181793 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.184112 4985 scope.go:117] "RemoveContainer" containerID="a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" Jan 28 18:18:47 crc kubenswrapper[4985]: E0128 18:18:47.184514 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1\": container with ID starting with a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1 not found: ID does not exist" containerID="a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.184560 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1"} err="failed to get container status \"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1\": rpc error: code = NotFound desc = could not find container \"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1\": container with ID starting with a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1 not found: ID does not exist" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.184594 4985 scope.go:117] "RemoveContainer" containerID="f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.192148 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerDied","Data":"687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.193189 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.196186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerDied","Data":"61b704f839468f67ac0c3f15e67acd552ecf612f482f58ba44a89c002ae8c45b"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.196221 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.199230 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerDied","Data":"b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.199525 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.203594 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.203091 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerDied","Data":"2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.207506 4985 scope.go:117] "RemoveContainer" containerID="30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.213206 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.220509 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.226548 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.231485 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerDied","Data":"4227c1ef4517986db5b63f69f417525b1efc3dddfa056b58023dfaf2602681c9"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.231629 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.242769 4985 scope.go:117] "RemoveContainer" containerID="ea88d0096240b8b1ce3a53612acc27a9069f84f2e4c034995d9d80ba5534c382" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.243748 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerDied","Data":"7de4f851d6fd3b3bdf2435ffb6090fbd2d50bbda34ffd7c0a08f88549a7af86b"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.243901 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.254862 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.287496 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3b0534-3356-446a-91e8-dae980c402db" path="/var/lib/kubelet/pods/7b3b0534-3356-446a-91e8-dae980c402db/volumes" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.288238 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" path="/var/lib/kubelet/pods/eefb5804-82d5-488f-a5c4-5473107ffbcd/volumes" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.288825 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1a5336-5c99-49fa-bb89-311781866770" path="/var/lib/kubelet/pods/ff1a5336-5c99-49fa-bb89-311781866770/volumes" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.291369 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.291397 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.292126 4985 scope.go:117] "RemoveContainer" containerID="5959b03d9788b40f0a702f2c357697b3ecb07a0cda1a9c0b368fd63267cd0bea" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.294847 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.297359 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.302086 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.310727 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.313471 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.322570 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.323977 4985 scope.go:117] "RemoveContainer" containerID="01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58" Jan 28 18:18:47 crc 
kubenswrapper[4985]: I0128 18:18:47.328027 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.334192 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbllw"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.338761 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nbllw"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.341280 4985 scope.go:117] "RemoveContainer" containerID="5ae5d10976e7c26eb6213f430d17c638f8547abe24f44e7063a7dba954835ef4" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.344862 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.348057 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.356393 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkbjb"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.358956 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkbjb"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.360003 4985 scope.go:117] "RemoveContainer" containerID="f89df29bdb5f4a1ac1d8a46bc1cdba1d48b8e3013145698fb6cdebd84b29470e" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.365782 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.369598 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.381700 4985 scope.go:117] "RemoveContainer" containerID="98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.399143 4985 scope.go:117] "RemoveContainer" containerID="c6a6370de55c9f1d322d443a680768dd95b5a50ccc8cfbead3f597f6cb81b47b" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.421565 4985 scope.go:117] "RemoveContainer" containerID="5673793a26abba26b8f6d32fd5a5358bd49bc89bef0867e3813c049e8ce5af23" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.440118 4985 scope.go:117] "RemoveContainer" containerID="c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.454373 4985 scope.go:117] "RemoveContainer" containerID="9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.477091 4985 scope.go:117] "RemoveContainer" containerID="08c2afc11e237eab84a8f7dfaa5b0598297222c01564bf4921e004a1b405af84" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.490648 4985 scope.go:117] "RemoveContainer" containerID="1c1dfa1718d5bb120e659769c80766e3c5cedbd440f581ae9a47ced34819aecd" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.503545 4985 scope.go:117] "RemoveContainer" containerID="eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.512612 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 
18:18:47.517428 4985 scope.go:117] "RemoveContainer" containerID="82b69880adf61999e4575782c5ecaafe22c81d0a0e17bab967aa245eeb683a6c" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.532053 4985 scope.go:117] "RemoveContainer" containerID="232f8967da98b027f9bf4b5329e389ea4efabb6b13f4e9043541624ffe8ba02b" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.544961 4985 scope.go:117] "RemoveContainer" containerID="31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.563044 4985 scope.go:117] "RemoveContainer" containerID="c3c7c834b59dec9afe12ae5cb4e24ce5d7fb7d283ff22d3d168e71ce368d578d" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.587333 4985 scope.go:117] "RemoveContainer" containerID="e42228c4ddd411e6182ff6bcd41d0e27a2e8b74487dc7087bd1ccdb69c1e91bf" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.603610 4985 scope.go:117] "RemoveContainer" containerID="3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.609682 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.619318 4985 scope.go:117] "RemoveContainer" containerID="f66d90e90c24d7eaca4eeddb8684aee625dffff1f85b1b4fa72af4b5c206bbee" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.635002 4985 scope.go:117] "RemoveContainer" containerID="6fbcabfceffdf85763f4008a949c3b5ecf075282566d7602a9169724a8470662" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.728776 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.770703 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.773025 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.804342 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.830113 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.982900 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 18:18:48 crc kubenswrapper[4985]: I0128 18:18:48.276954 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 18:18:48 crc kubenswrapper[4985]: I0128 18:18:48.850976 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 18:18:48 crc kubenswrapper[4985]: I0128 18:18:48.951607 4985 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.061166 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.121242 4985 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.197874 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hvkcw"] Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198181 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198202 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198215 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198222 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198230 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198236 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198270 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198277 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198287 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198293 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198299 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198305 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198311 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198317 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198327 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198334 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198341 4985 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198349 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198359 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198365 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198373 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198379 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198387 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198393 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198401 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198408 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198420 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198426 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198436 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198443 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198450 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198457 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198465 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198472 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198480 
4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198486 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198495 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198501 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198512 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198519 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198525 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198543 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198550 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198556 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198565 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198572 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198579 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198585 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198593 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" containerName="installer" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198600 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" containerName="installer" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198610 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="extract-content" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198616 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="extract-content" Jan 28 18:18:49 crc 
kubenswrapper[4985]: E0128 18:18:49.198624 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198630 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198638 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198645 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198652 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198659 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="extract-utilities" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198743 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198754 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198762 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198769 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198775 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198781 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198790 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198799 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198808 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198813 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" containerName="installer" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198821 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198828 4985 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="registry-server" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198836 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.199290 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.202468 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.202566 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.202814 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"] Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.203680 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.204698 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.205175 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.206791 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.206938 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.206985 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.206999 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.207140 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.207796 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.212073 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.212225 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hvkcw"] Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.216486 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"] Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.269818 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.269958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270001 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270094 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cnps\" (UniqueName: \"kubernetes.io/projected/4845499d-139f-4839-9f9f-4d77c7f0ae37-kube-api-access-4cnps\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270298 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270359 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.272859 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478dee72-717a-448e-b14d-15d600c82eb5" path="/var/lib/kubelet/pods/478dee72-717a-448e-b14d-15d600c82eb5/volumes" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.273982 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" path="/var/lib/kubelet/pods/4bec6c8f-9678-463c-9e09-5b8e362f2f1b/volumes" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.274743 4985 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" path="/var/lib/kubelet/pods/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580/volumes" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.275865 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" path="/var/lib/kubelet/pods/bebbf794-5459-4a75-bff1-92b7551d4784/volumes" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.276459 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" path="/var/lib/kubelet/pods/d797afdd-19c6-45ed-81c8-5fa31175e121/volumes" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.277494 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" path="/var/lib/kubelet/pods/e5f99d20-5afa-4144-b66e-9198c1d6c66d/volumes" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.278070 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" path="/var/lib/kubelet/pods/ee77ca55-8cd0-4401-afec-9817fee5f6bb/volumes" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.278614 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" path="/var/lib/kubelet/pods/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d/volumes" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.328080 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.371839 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.371986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cnps\" (UniqueName: \"kubernetes.io/projected/4845499d-139f-4839-9f9f-4d77c7f0ae37-kube-api-access-4cnps\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372011 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372032 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372094 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.374214 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.374867 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.376768 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.389944 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.390478 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.395796 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") pod 
\"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.396063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cnps\" (UniqueName: \"kubernetes.io/projected/4845499d-139f-4839-9f9f-4d77c7f0ae37-kube-api-access-4cnps\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.442051 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.458056 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.556905 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.570505 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.731413 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.770657 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"] Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.892159 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.911394 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.018804 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hvkcw"] Jan 28 18:18:50 crc kubenswrapper[4985]: W0128 18:18:50.023314 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4845499d_139f_4839_9f9f_4d77c7f0ae37.slice/crio-ac463c2c0bf66adfc9b65f50c82aeb322d76085e4cecf33f4cc8262707f86f48 WatchSource:0}: Error finding container ac463c2c0bf66adfc9b65f50c82aeb322d76085e4cecf33f4cc8262707f86f48: Status 404 returned error can't find the container with id ac463c2c0bf66adfc9b65f50c82aeb322d76085e4cecf33f4cc8262707f86f48 Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.049971 4985 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.050386 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ab7d18f55611d02a03d62a6ebace75ed35b7b1a319a4367884bd6c2504dce01f" gracePeriod=5 Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.225926 4985 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.275837 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" event={"ID":"4845499d-139f-4839-9f9f-4d77c7f0ae37","Type":"ContainerStarted","Data":"dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089"} Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.275894 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" event={"ID":"4845499d-139f-4839-9f9f-4d77c7f0ae37","Type":"ContainerStarted","Data":"ac463c2c0bf66adfc9b65f50c82aeb322d76085e4cecf33f4cc8262707f86f48"} Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.276363 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.278049 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.278128 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.280170 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" event={"ID":"7c43298b-f494-48e0-b307-61e702afc5ef","Type":"ContainerStarted","Data":"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795"} Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.280207 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" event={"ID":"7c43298b-f494-48e0-b307-61e702afc5ef","Type":"ContainerStarted","Data":"63bfaff3938f44bf1190a7307ea884168e52b2cc5fe98c3d56e1af05c046f6ea"} Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.280691 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.294318 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.305308 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podStartSLOduration=6.30528515 podStartE2EDuration="6.30528515s" podCreationTimestamp="2026-01-28 18:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:50.304029874 +0000 UTC m=+341.130592705" watchObservedRunningTime="2026-01-28 18:18:50.30528515 +0000 UTC m=+341.131847971" Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.623438 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.894338 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.087119 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" podStartSLOduration=6.087096326 podStartE2EDuration="6.087096326s" podCreationTimestamp="2026-01-28 18:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:50.331522526 +0000 UTC m=+341.158085357" watchObservedRunningTime="2026-01-28 18:18:51.087096326 +0000 UTC m=+341.913659147" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.089528 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"] Jan 28 18:18:51 crc kubenswrapper[4985]: E0128 18:18:51.089795 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.089818 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.089956 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.090445 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.093125 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.093530 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.093782 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.093820 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.096776 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.097366 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.103017 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.103855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " 
pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.103901 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.104008 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.104084 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.104136 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.105489 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"] Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.205135 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.205997 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.206079 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.206155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") pod \"controller-manager-8658c89568-xqc66\" 
(UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.206195 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.207572 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.208357 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.209104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.218502 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.227534 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.250496 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.261138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.288887 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.434680 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.503656 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.540444 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.642425 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"] Jan 28 18:18:51 crc kubenswrapper[4985]: W0128 18:18:51.648914 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd33a411_202c_41c4_a6b0_cf49ca4945a0.slice/crio-62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113 WatchSource:0}: Error finding container 62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113: Status 404 returned error can't find the container with id 62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113 Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.685099 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.867303 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.209680 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.293018 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" event={"ID":"fd33a411-202c-41c4-a6b0-cf49ca4945a0","Type":"ContainerStarted","Data":"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85"} Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.293056 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" event={"ID":"fd33a411-202c-41c4-a6b0-cf49ca4945a0","Type":"ContainerStarted","Data":"62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113"} Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.293627 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.297704 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.313888 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" podStartSLOduration=7.31386885 podStartE2EDuration="7.31386885s" podCreationTimestamp="2026-01-28 18:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:52.31002445 +0000 UTC m=+343.136587281" watchObservedRunningTime="2026-01-28 18:18:52.31386885 +0000 UTC m=+343.140431671" Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.637671 4985 
Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.850395 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.251857 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.292817 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.294651 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.716368 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.774603 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 28 18:18:54 crc kubenswrapper[4985]: I0128 18:18:54.111084 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 28 18:18:54 crc kubenswrapper[4985]: I0128 18:18:54.407471 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 28 18:18:54 crc kubenswrapper[4985]: I0128 18:18:54.456898 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 28 18:18:54 crc kubenswrapper[4985]: I0128 18:18:54.458445 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.186456 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.254832 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.285686 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.297761 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.307965 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.308013 4985 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ab7d18f55611d02a03d62a6ebace75ed35b7b1a319a4367884bd6c2504dce01f" exitCode=137
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.382567 4985 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.518968 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
object-"openshift-image-registry"/"image-registry-tls" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.593983 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.649041 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.649117 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771042 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771110 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771158 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771226 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771398 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771276 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771327 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771462 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771731 4985 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771761 4985 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771772 4985 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771784 4985 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.780835 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.827374 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.873174 4985 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:56 crc kubenswrapper[4985]: I0128 18:18:56.275034 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 18:18:56 crc kubenswrapper[4985]: I0128 18:18:56.316037 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 18:18:56 crc kubenswrapper[4985]: I0128 18:18:56.316117 4985 scope.go:117] "RemoveContainer" containerID="ab7d18f55611d02a03d62a6ebace75ed35b7b1a319a4367884bd6c2504dce01f" Jan 28 18:18:56 crc kubenswrapper[4985]: I0128 18:18:56.316181 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.264996 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.270128 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.270289 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.270428 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.283167 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.283216 4985 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2b8db072-9548-45aa-92d1-61dab999c4ad" Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.299253 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.299328 4985 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2b8db072-9548-45aa-92d1-61dab999c4ad" Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.806374 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 18:18:58 crc kubenswrapper[4985]: I0128 18:18:58.728107 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 18:18:58 crc kubenswrapper[4985]: I0128 18:18:58.843686 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 18:18:59 crc kubenswrapper[4985]: I0128 18:18:59.196771 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 18:19:01 crc kubenswrapper[4985]: I0128 18:19:01.085827 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 18:19:03 crc kubenswrapper[4985]: I0128 18:19:03.223960 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 18:19:03 crc kubenswrapper[4985]: I0128 18:19:03.251553 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 18:19:03 crc kubenswrapper[4985]: I0128 18:19:03.917848 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 18:19:04 crc kubenswrapper[4985]: I0128 18:19:04.386884 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 18:19:04 crc kubenswrapper[4985]: I0128 18:19:04.899475 4985 
Jan 28 18:19:05 crc kubenswrapper[4985]: I0128 18:19:05.506189 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"]
Jan 28 18:19:05 crc kubenswrapper[4985]: I0128 18:19:05.507448 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerName="controller-manager" containerID="cri-o://b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85" gracePeriod=30
Jan 28 18:19:05 crc kubenswrapper[4985]: I0128 18:19:05.519040 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"]
Jan 28 18:19:05 crc kubenswrapper[4985]: I0128 18:19:05.519406 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" containerName="route-controller-manager" containerID="cri-o://176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795" gracePeriod=30
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.031066 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.097312 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.109999 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.214852 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") pod \"7c43298b-f494-48e0-b307-61e702afc5ef\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") "
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.214919 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") "
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.214984 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") "
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215009 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") pod \"7c43298b-f494-48e0-b307-61e702afc5ef\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") "
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215035 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") "
\"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215061 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") pod \"7c43298b-f494-48e0-b307-61e702afc5ef\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215113 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") pod \"7c43298b-f494-48e0-b307-61e702afc5ef\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215137 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215161 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215788 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca" (OuterVolumeSpecName: "client-ca") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215796 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca" (OuterVolumeSpecName: "client-ca") pod "7c43298b-f494-48e0-b307-61e702afc5ef" (UID: "7c43298b-f494-48e0-b307-61e702afc5ef"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215919 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215997 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config" (OuterVolumeSpecName: "config") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.216483 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config" (OuterVolumeSpecName: "config") pod "7c43298b-f494-48e0-b307-61e702afc5ef" (UID: "7c43298b-f494-48e0-b307-61e702afc5ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.221000 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7c43298b-f494-48e0-b307-61e702afc5ef" (UID: "7c43298b-f494-48e0-b307-61e702afc5ef"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.221047 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4" (OuterVolumeSpecName: "kube-api-access-xqgr4") pod "7c43298b-f494-48e0-b307-61e702afc5ef" (UID: "7c43298b-f494-48e0-b307-61e702afc5ef"). InnerVolumeSpecName "kube-api-access-xqgr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.221098 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm" (OuterVolumeSpecName: "kube-api-access-qj7rm") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "kube-api-access-qj7rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.221481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316167 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316250 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316279 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316292 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316303 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316313 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316324 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316334 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316344 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.376944 4985 generic.go:334] "Generic (PLEG): container finished" podID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerID="b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85" exitCode=0 Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.377045 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" event={"ID":"fd33a411-202c-41c4-a6b0-cf49ca4945a0","Type":"ContainerDied","Data":"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85"} Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.377083 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" event={"ID":"fd33a411-202c-41c4-a6b0-cf49ca4945a0","Type":"ContainerDied","Data":"62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113"} Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.377118 4985 scope.go:117] "RemoveContainer" containerID="b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85" Jan 28 
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.379490 4985 generic.go:334] "Generic (PLEG): container finished" podID="7c43298b-f494-48e0-b307-61e702afc5ef" containerID="176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795" exitCode=0
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.379558 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" event={"ID":"7c43298b-f494-48e0-b307-61e702afc5ef","Type":"ContainerDied","Data":"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795"}
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.379600 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" event={"ID":"7c43298b-f494-48e0-b307-61e702afc5ef","Type":"ContainerDied","Data":"63bfaff3938f44bf1190a7307ea884168e52b2cc5fe98c3d56e1af05c046f6ea"}
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.379716 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.400662 4985 scope.go:117] "RemoveContainer" containerID="b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85"
Jan 28 18:19:06 crc kubenswrapper[4985]: E0128 18:19:06.401245 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85\": container with ID starting with b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85 not found: ID does not exist" containerID="b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.401303 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85"} err="failed to get container status \"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85\": rpc error: code = NotFound desc = could not find container \"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85\": container with ID starting with b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85 not found: ID does not exist"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.401335 4985 scope.go:117] "RemoveContainer" containerID="176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.419428 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"]
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.421961 4985 scope.go:117] "RemoveContainer" containerID="176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795"
Jan 28 18:19:06 crc kubenswrapper[4985]: E0128 18:19:06.423667 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795\": container with ID starting with 176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795 not found: ID does not exist" containerID="176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795"
exist" containerID="176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.423719 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795"} err="failed to get container status \"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795\": rpc error: code = NotFound desc = could not find container \"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795\": container with ID starting with 176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795 not found: ID does not exist" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.427458 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"] Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.431305 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"] Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.434507 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"] Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.102319 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:07 crc kubenswrapper[4985]: E0128 18:19:07.102671 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerName="controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.102687 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerName="controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: E0128 18:19:07.102878 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" containerName="route-controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.102885 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" containerName="route-controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.102985 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerName="controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.103007 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" containerName="route-controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.103570 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.105285 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.105589 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.105864 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.106808 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.106838 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.106837 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.110644 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"] Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.111725 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116042 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116174 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116508 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.118061 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.122886 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.123312 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126653 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " 
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126746 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126794 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n54nd\" (UniqueName: \"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126809 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126825 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126843 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126945 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.134936 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"]
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227802 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227893 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227964 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227999 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n54nd\" (UniqueName: \"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.228017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.228034 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.228051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.228081 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.229331 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.229380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.230702 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.231214 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.231475 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.234502 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.234685 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.253328 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.253431 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n54nd\" (UniqueName: \"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.272501 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" path="/var/lib/kubelet/pods/7c43298b-f494-48e0-b307-61e702afc5ef/volumes"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.273276 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" path="/var/lib/kubelet/pods/fd33a411-202c-41c4-a6b0-cf49ca4945a0/volumes"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.373056 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.432685 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.432960 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.443340 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
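The VerifyControllerAttachedVolume / MountVolume.SetUp pairs above are the kubelet reconciling volumes declared in each pod spec. A minimal sketch, assuming hypothetical names (the mount path below is illustrative and not taken from this cluster's manifests), of how a ConfigMap-backed volume like "client-ca" is expressed with the k8s.io/api/core/v1 types:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// podVolumes sketches the volume/volumeMount pair behind the
// "operationExecutor.VerifyControllerAttachedVolume ... MountVolume.SetUp succeeded"
// lines above: a ConfigMap volume named "client-ca" mounted into the container.
func podVolumes() ([]corev1.Volume, []corev1.VolumeMount) {
	vols := []corev1.Volume{{
		Name: "client-ca",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "client-ca"},
			},
		},
	}}
	mounts := []corev1.VolumeMount{{
		Name:      "client-ca",
		MountPath: "/etc/client-ca", // illustrative path, not from the log
	}}
	return vols, mounts
}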
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.562678 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.629131 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"]
Jan 28 18:19:07 crc kubenswrapper[4985]: W0128 18:19:07.642056 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod015deeab_c778_426c_ae5e_c5a0ab596483.slice/crio-ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745 WatchSource:0}: Error finding container ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745: Status 404 returned error can't find the container with id ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.673423 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"]
Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.871040 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.395331 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" event={"ID":"015deeab-c778-426c-ae5e-c5a0ab596483","Type":"ContainerStarted","Data":"bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d"}
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.395472 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.395489 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" event={"ID":"015deeab-c778-426c-ae5e-c5a0ab596483","Type":"ContainerStarted","Data":"ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745"}
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.398173 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" event={"ID":"75e23934-9cb3-423f-92d4-888a740e00f3","Type":"ContainerStarted","Data":"0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1"}
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.398212 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" event={"ID":"75e23934-9cb3-423f-92d4-888a740e00f3","Type":"ContainerStarted","Data":"105cd5f36d905cb5f852dedc6ba5310ebfc115c0484f7e113137d4c547156ef4"}
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.398473 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.406444 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.415174 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" podStartSLOduration=3.4151583260000002 podStartE2EDuration="3.415158326s" podCreationTimestamp="2026-01-28 18:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:08.412375106 +0000 UTC m=+359.238937927" watchObservedRunningTime="2026-01-28 18:19:08.415158326 +0000 UTC m=+359.241721147"
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.437808 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" podStartSLOduration=3.437791599 podStartE2EDuration="3.437791599s" podCreationTimestamp="2026-01-28 18:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:08.436731718 +0000 UTC m=+359.263294539" watchObservedRunningTime="2026-01-28 18:19:08.437791599 +0000 UTC m=+359.264354420"
Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.580464 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"
Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.168395 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.782995 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.791696 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.834940 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.838361 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 28 18:19:10 crc kubenswrapper[4985]: I0128 18:19:10.005812 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 28 18:19:10 crc kubenswrapper[4985]: I0128 18:19:10.717915 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 18:19:11 crc kubenswrapper[4985]: I0128 18:19:11.186606 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:19:11 crc kubenswrapper[4985]: I0128 18:19:11.186687 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:19:12 crc kubenswrapper[4985]: I0128 18:19:12.124742 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 28 18:19:12 crc kubenswrapper[4985]: I0128 18:19:12.445241 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
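The "Probe failed" entries above come from the kubelet's HTTP liveness check against http://127.0.0.1:8798/health. A minimal sketch, assuming field values inferred from the log rather than copied from the actual machine-config-daemon manifest, of how such a probe is declared with the k8s.io/api/core/v1 types:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessProbe sketches the kind of HTTP liveness probe behind the
// "Probe failed" lines: kubelet GETs the /health endpoint and records a
// failure when the connection is refused. Port 8798 and the ~30s spacing
// are taken from the log; the threshold is illustrative.
func livenessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{ // field name in k8s.io/api v1.23+
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // matches the 18:19:11 -> 18:19:41 spacing seen below
		FailureThreshold: 3,  // illustrative
	}
}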
Jan 28 18:19:12 crc kubenswrapper[4985]: I0128 18:19:12.891393 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 28 18:19:13 crc kubenswrapper[4985]: I0128 18:19:13.212228 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 28 18:19:16 crc kubenswrapper[4985]: I0128 18:19:16.464344 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 28 18:19:17 crc kubenswrapper[4985]: I0128 18:19:17.543815 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 28 18:19:17 crc kubenswrapper[4985]: I0128 18:19:17.687054 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.129671 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.212638 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.392859 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.474371 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.919300 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 28 18:19:19 crc kubenswrapper[4985]: I0128 18:19:19.737825 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"]
Jan 28 18:19:20 crc kubenswrapper[4985]: I0128 18:19:20.453566 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 28 18:19:20 crc kubenswrapper[4985]: I0128 18:19:20.517935 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 28 18:19:21 crc kubenswrapper[4985]: I0128 18:19:21.469112 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 28 18:19:22 crc kubenswrapper[4985]: I0128 18:19:22.144311 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 28 18:19:22 crc kubenswrapper[4985]: I0128 18:19:22.190949 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 28 18:19:23 crc kubenswrapper[4985]: I0128 18:19:23.158826 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 28 18:19:25 crc kubenswrapper[4985]: I0128 18:19:25.512624 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"]
Jan 28 18:19:25 crc kubenswrapper[4985]: I0128 18:19:25.512976 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" containerName="controller-manager" containerID="cri-o://0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1" gracePeriod=30
Jan 28 18:19:25 crc kubenswrapper[4985]: E0128 18:19:25.648468 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75e23934_9cb3_423f_92d4_888a740e00f3.slice/crio-0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 18:19:25 crc kubenswrapper[4985]: I0128 18:19:25.739059 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.507950 4985 generic.go:334] "Generic (PLEG): container finished" podID="75e23934-9cb3-423f-92d4-888a740e00f3" containerID="0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1" exitCode=0
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.508067 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" event={"ID":"75e23934-9cb3-423f-92d4-888a740e00f3","Type":"ContainerDied","Data":"0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1"}
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.604709 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.710599 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.745785 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.773136 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-656679f4c7-mmrtg"]
Jan 28 18:19:26 crc kubenswrapper[4985]: E0128 18:19:26.773384 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" containerName="controller-manager"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.773399 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" containerName="controller-manager"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.773526 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" containerName="controller-manager"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.773933 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.814189 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-656679f4c7-mmrtg"]
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.915723 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") "
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.915834 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") "
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.915886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n54nd\" (UniqueName: \"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") "
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.915958 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") "
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916014 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") "
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916314 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0590b9a-abcc-4541-9914-675dc0ca1976-serving-cert\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916368 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxjc\" (UniqueName: \"kubernetes.io/projected/a0590b9a-abcc-4541-9914-675dc0ca1976-kube-api-access-tvxjc\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916412 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-config\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916440 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-client-ca\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916491 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-proxy-ca-bundles\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.917614 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.917817 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config" (OuterVolumeSpecName: "config") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.918328 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca" (OuterVolumeSpecName: "client-ca") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.923740 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.927424 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd" (OuterVolumeSpecName: "kube-api-access-n54nd") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "kube-api-access-n54nd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.009646 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxjc\" (UniqueName: \"kubernetes.io/projected/a0590b9a-abcc-4541-9914-675dc0ca1976-kube-api-access-tvxjc\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018376 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-config\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-client-ca\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018447 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-proxy-ca-bundles\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0590b9a-abcc-4541-9914-675dc0ca1976-serving-cert\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018548 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018561 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018570 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018579 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018591 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n54nd\" (UniqueName: \"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") on node \"crc\" DevicePath \"\""
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.020586 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-client-ca\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.020679 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-proxy-ca-bundles\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.021989 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-config\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.023777 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0590b9a-abcc-4541-9914-675dc0ca1976-serving-cert\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.046369 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxjc\" (UniqueName: \"kubernetes.io/projected/a0590b9a-abcc-4541-9914-675dc0ca1976-kube-api-access-tvxjc\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.103768 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.517975 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" event={"ID":"75e23934-9cb3-423f-92d4-888a740e00f3","Type":"ContainerDied","Data":"105cd5f36d905cb5f852dedc6ba5310ebfc115c0484f7e113137d4c547156ef4"}
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.518078 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.518638 4985 scope.go:117] "RemoveContainer" containerID="0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1"
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.545774 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"]
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.552240 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"]
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.598998 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-656679f4c7-mmrtg"]
Jan 28 18:19:27 crc kubenswrapper[4985]: W0128 18:19:27.607160 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0590b9a_abcc_4541_9914_675dc0ca1976.slice/crio-941c6bfd322e3e4ce80a380a1c59b768a8e5b3e90786970cef77e19ab5eb8c35 WatchSource:0}: Error finding container 941c6bfd322e3e4ce80a380a1c59b768a8e5b3e90786970cef77e19ab5eb8c35: Status 404 returned error can't find the container with id 941c6bfd322e3e4ce80a380a1c59b768a8e5b3e90786970cef77e19ab5eb8c35
Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.629492 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.525787 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" event={"ID":"a0590b9a-abcc-4541-9914-675dc0ca1976","Type":"ContainerStarted","Data":"03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c"}
Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.525836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" event={"ID":"a0590b9a-abcc-4541-9914-675dc0ca1976","Type":"ContainerStarted","Data":"941c6bfd322e3e4ce80a380a1c59b768a8e5b3e90786970cef77e19ab5eb8c35"}
Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.526092 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.538137 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.567051 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podStartSLOduration=3.567015363 podStartE2EDuration="3.567015363s" podCreationTimestamp="2026-01-28 18:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:28.542896128 +0000 UTC m=+379.369458949" watchObservedRunningTime="2026-01-28 18:19:28.567015363 +0000 UTC m=+379.393578184"
Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.928345 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 28 18:19:29 crc kubenswrapper[4985]: I0128 18:19:29.262864 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
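The "SyncLoop (probe)" and "Observed pod startup duration" entries above track a pod going from ContainerStarted to ready. A minimal sketch, from the API-client side rather than the kubelet's internals, of watching a pod until its Ready condition turns true (the clientset wiring and error handling are abbreviated; names are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForReady watches one pod and returns once the PodReady condition is
// true, i.e. the point the log above records as probe status="ready".
func waitForReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod ready:", pod.Name)
				return nil
			}
		}
	}
	return fmt.Errorf("watch closed before %s/%s became ready", ns, name)
}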
Jan 28 18:19:29 crc kubenswrapper[4985]: I0128 18:19:29.270821 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" path="/var/lib/kubelet/pods/75e23934-9cb3-423f-92d4-888a740e00f3/volumes"
Jan 28 18:19:29 crc kubenswrapper[4985]: I0128 18:19:29.432078 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 28 18:19:29 crc kubenswrapper[4985]: I0128 18:19:29.713172 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 28 18:19:30 crc kubenswrapper[4985]: I0128 18:19:30.042321 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 28 18:19:30 crc kubenswrapper[4985]: I0128 18:19:30.635451 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 28 18:19:30 crc kubenswrapper[4985]: I0128 18:19:30.700410 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 28 18:19:31 crc kubenswrapper[4985]: I0128 18:19:31.110624 4985 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 18:19:31 crc kubenswrapper[4985]: I0128 18:19:31.925138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 28 18:19:31 crc kubenswrapper[4985]: I0128 18:19:31.982556 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.024925 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.168575 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.311294 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.362294 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.563159 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 28 18:19:33 crc kubenswrapper[4985]: I0128 18:19:33.549372 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 28 18:19:35 crc kubenswrapper[4985]: I0128 18:19:35.471308 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 28 18:19:35 crc kubenswrapper[4985]: I0128 18:19:35.605634 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 28 18:19:36 crc kubenswrapper[4985]: I0128 18:19:36.405081 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
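The long runs of "Caches populated for *v1.ConfigMap/..." lines are client-go reflectors finishing their initial list-and-watch for per-namespace objects (the *v1.Service line even names k8s.io/client-go/informers/factory.go:160). A minimal sketch, assuming an example namespace rather than anything special to this log, of the same machinery via a shared informer factory:

package main

import (
	"context"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// startConfigMapInformer lists and watches ConfigMaps in one namespace and
// blocks until the local cache is synced, which is the moment a reflector
// would log "Caches populated".
func startConfigMapInformer(ctx context.Context, cs kubernetes.Interface) cache.SharedIndexInformer {
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 10*time.Minute, informers.WithNamespace("openshift-dns"), // example namespace
	)
	inf := factory.Core().V1().ConfigMaps().Informer()
	factory.Start(ctx.Done())
	cache.WaitForCacheSync(ctx.Done(), inf.HasSynced)
	return inf
}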
Jan 28 18:19:36 crc kubenswrapper[4985]: I0128 18:19:36.849057 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 28 18:19:36 crc kubenswrapper[4985]: I0128 18:19:36.942522 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 28 18:19:41 crc kubenswrapper[4985]: I0128 18:19:41.185858 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:19:41 crc kubenswrapper[4985]: I0128 18:19:41.185954 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.923196 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"]
Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.925433 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.928159 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.928654 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.931529 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"]
Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.934277 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.934393 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.934394 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.942650 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.056707 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a73cc747-1671-4ae3-8784-3087a06b300c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.057141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz29k\" (UniqueName: \"kubernetes.io/projected/a73cc747-1671-4ae3-8784-3087a06b300c-kube-api-access-gz29k\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.057280 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a73cc747-1671-4ae3-8784-3087a06b300c-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.158214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a73cc747-1671-4ae3-8784-3087a06b300c-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.158362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a73cc747-1671-4ae3-8784-3087a06b300c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.158410 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz29k\" (UniqueName: \"kubernetes.io/projected/a73cc747-1671-4ae3-8784-3087a06b300c-kube-api-access-gz29k\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.159387 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a73cc747-1671-4ae3-8784-3087a06b300c-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.179728 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a73cc747-1671-4ae3-8784-3087a06b300c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.184451 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz29k\" (UniqueName: \"kubernetes.io/projected/a73cc747-1671-4ae3-8784-3087a06b300c-kube-api-access-gz29k\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.247893 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"
Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.710086 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"]
Jan 28 18:19:43 crc kubenswrapper[4985]: W0128 18:19:43.718486 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda73cc747_1671_4ae3_8784_3087a06b300c.slice/crio-85766ca89e109c438601bbfe442aa785dad6b81873524afe9a524ae10859e445 WatchSource:0}: Error finding container 85766ca89e109c438601bbfe442aa785dad6b81873524afe9a524ae10859e445: Status 404 returned error can't find the container with id 85766ca89e109c438601bbfe442aa785dad6b81873524afe9a524ae10859e445
Jan 28 18:19:44 crc kubenswrapper[4985]: I0128 18:19:44.624465 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" event={"ID":"a73cc747-1671-4ae3-8784-3087a06b300c","Type":"ContainerStarted","Data":"85766ca89e109c438601bbfe442aa785dad6b81873524afe9a524ae10859e445"}
Jan 28 18:19:44 crc kubenswrapper[4985]: I0128 18:19:44.782350 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift" containerID="cri-o://4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" gracePeriod=15
Jan 28 18:19:44 crc kubenswrapper[4985]: I0128 18:19:44.797160 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.336984 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq"
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.393000 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56cf947455-bgjvj"]
Jan 28 18:19:45 crc kubenswrapper[4985]: E0128 18:19:45.393345 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift"
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.393363 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift"
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.393482 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift"
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.394003 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj"
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.399273 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56cf947455-bgjvj"]
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.490872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") "
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.490923 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") "
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491011 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") "
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491041 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") "
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491067 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") "
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491090 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") "
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491113 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") "
Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492272 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492327 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492362 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492585 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492623 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492648 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492678 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492837 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492881 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492905 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-service-ca\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492962 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-dir\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492985 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-session\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493017 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-error\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493103 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493139 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493163 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-login\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493222 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493332 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7frd\" (UniqueName: \"kubernetes.io/projected/f077e962-d9b2-45c5-a87e-1dd03ad0378b-kube-api-access-h7frd\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493359 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-policies\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493437 4985 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493452 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493474 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.494114 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.499048 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.507550 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc" (OuterVolumeSpecName: "kube-api-access-jcmdc") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "kube-api-access-jcmdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.507660 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.508804 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.509527 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.510136 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.510257 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.510330 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.524531 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.525441 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.532106 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.532888 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" containerName="route-controller-manager" containerID="cri-o://bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d" gracePeriod=30 Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595145 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595244 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-login\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595307 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595334 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595358 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7frd\" (UniqueName: \"kubernetes.io/projected/f077e962-d9b2-45c5-a87e-1dd03ad0378b-kube-api-access-h7frd\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595382 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-policies\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-router-certs\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596493 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-policies\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596564 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596594 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596648 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-service-ca\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-dir\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596621 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596701 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-session\") pod 
\"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596732 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-error\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596798 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597310 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597318 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-service-ca\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597356 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-dir\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597665 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597692 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597706 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597719 4985 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597731 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597745 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597759 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597771 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597792 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597804 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597815 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.599028 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-login\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.599540 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.600184 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.600525 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-error\") pod 
\"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.600733 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-session\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.601565 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.602462 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-router-certs\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.602837 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.612921 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7frd\" (UniqueName: \"kubernetes.io/projected/f077e962-d9b2-45c5-a87e-1dd03ad0378b-kube-api-access-h7frd\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.649793 4985 generic.go:334] "Generic (PLEG): container finished" podID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerID="4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" exitCode=0 Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.649853 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" event={"ID":"d061f6d6-1983-405d-93af-3e492ff49f7c","Type":"ContainerDied","Data":"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5"} Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.649891 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" event={"ID":"d061f6d6-1983-405d-93af-3e492ff49f7c","Type":"ContainerDied","Data":"92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78"} Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.649913 4985 scope.go:117] "RemoveContainer" containerID="4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.650064 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.698912 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.706797 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.715552 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.378287 4985 scope.go:117] "RemoveContainer" containerID="4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" Jan 28 18:19:46 crc kubenswrapper[4985]: E0128 18:19:46.379050 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5\": container with ID starting with 4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5 not found: ID does not exist" containerID="4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.379086 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5"} err="failed to get container status \"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5\": rpc error: code = NotFound desc = could not find container \"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5\": container with ID starting with 4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5 not found: ID does not exist" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.657671 4985 generic.go:334] "Generic (PLEG): container finished" podID="015deeab-c778-426c-ae5e-c5a0ab596483" containerID="bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d" exitCode=0 Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.657732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" event={"ID":"015deeab-c778-426c-ae5e-c5a0ab596483","Type":"ContainerDied","Data":"bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d"} Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.762237 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.807689 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p"] Jan 28 18:19:46 crc kubenswrapper[4985]: E0128 18:19:46.808010 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" containerName="route-controller-manager" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.808035 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" containerName="route-controller-manager" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.808198 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" containerName="route-controller-manager" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.808714 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p"] Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.808852 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.885770 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56cf947455-bgjvj"] Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.921734 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") pod \"015deeab-c778-426c-ae5e-c5a0ab596483\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.921790 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") pod \"015deeab-c778-426c-ae5e-c5a0ab596483\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.921812 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") pod \"015deeab-c778-426c-ae5e-c5a0ab596483\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.921839 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") pod \"015deeab-c778-426c-ae5e-c5a0ab596483\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.922054 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-config\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.922093 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-client-ca\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.922109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27n5s\" (UniqueName: \"kubernetes.io/projected/983beebe-f0c3-4fba-9861-0ea007559cc5-kube-api-access-27n5s\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.922143 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/983beebe-f0c3-4fba-9861-0ea007559cc5-serving-cert\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.923011 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca" (OuterVolumeSpecName: "client-ca") pod "015deeab-c778-426c-ae5e-c5a0ab596483" (UID: "015deeab-c778-426c-ae5e-c5a0ab596483"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.923764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config" (OuterVolumeSpecName: "config") pod "015deeab-c778-426c-ae5e-c5a0ab596483" (UID: "015deeab-c778-426c-ae5e-c5a0ab596483"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.928711 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26" (OuterVolumeSpecName: "kube-api-access-jdd26") pod "015deeab-c778-426c-ae5e-c5a0ab596483" (UID: "015deeab-c778-426c-ae5e-c5a0ab596483"). InnerVolumeSpecName "kube-api-access-jdd26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.931053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "015deeab-c778-426c-ae5e-c5a0ab596483" (UID: "015deeab-c778-426c-ae5e-c5a0ab596483"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.025882 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/983beebe-f0c3-4fba-9861-0ea007559cc5-serving-cert\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-config\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-client-ca\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026079 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27n5s\" (UniqueName: \"kubernetes.io/projected/983beebe-f0c3-4fba-9861-0ea007559cc5-kube-api-access-27n5s\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026174 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026187 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026197 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026207 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.029556 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-config\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.035650 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-client-ca\") pod \"route-controller-manager-5549b68d6f-t2f7p\" 
(UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.044150 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/983beebe-f0c3-4fba-9861-0ea007559cc5-serving-cert\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.046267 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27n5s\" (UniqueName: \"kubernetes.io/projected/983beebe-f0c3-4fba-9861-0ea007559cc5-kube-api-access-27n5s\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.137046 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.272498 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" path="/var/lib/kubelet/pods/d061f6d6-1983-405d-93af-3e492ff49f7c/volumes" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.680047 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" event={"ID":"f077e962-d9b2-45c5-a87e-1dd03ad0378b","Type":"ContainerStarted","Data":"47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95"} Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.680107 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" event={"ID":"f077e962-d9b2-45c5-a87e-1dd03ad0378b","Type":"ContainerStarted","Data":"0951de6b9b7fd10049d964696b15d69e2ae8d48e6cfa6f5e0697f4865e129509"} Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.680319 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.682980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" event={"ID":"015deeab-c778-426c-ae5e-c5a0ab596483","Type":"ContainerDied","Data":"ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745"} Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.683011 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.683396 4985 scope.go:117] "RemoveContainer" containerID="bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.725431 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podStartSLOduration=28.725403699 podStartE2EDuration="28.725403699s" podCreationTimestamp="2026-01-28 18:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:47.71149865 +0000 UTC m=+398.538061471" watchObservedRunningTime="2026-01-28 18:19:47.725403699 +0000 UTC m=+398.551966520" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.726678 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.729830 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.827417 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.037209 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p"] Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.416812 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8"] Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.418004 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.423530 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.424479 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-hx5bp" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.431692 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8"] Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.548299 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/81fa949b-5c24-44da-aa29-bd34bcc39d6e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-mttz8\" (UID: \"81fa949b-5c24-44da-aa29-bd34bcc39d6e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.650043 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/81fa949b-5c24-44da-aa29-bd34bcc39d6e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-mttz8\" (UID: \"81fa949b-5c24-44da-aa29-bd34bcc39d6e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.657694 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/81fa949b-5c24-44da-aa29-bd34bcc39d6e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-mttz8\" (UID: \"81fa949b-5c24-44da-aa29-bd34bcc39d6e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.692608 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" event={"ID":"983beebe-f0c3-4fba-9861-0ea007559cc5","Type":"ContainerStarted","Data":"4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea"} Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.692672 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" event={"ID":"983beebe-f0c3-4fba-9861-0ea007559cc5","Type":"ContainerStarted","Data":"b53b54af51049149b33261bcc18ee5951c7a5aca757e8ef97983d99658b276f4"} Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.694307 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.696499 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" event={"ID":"a73cc747-1671-4ae3-8784-3087a06b300c","Type":"ContainerStarted","Data":"4d9d34679f8306214025d40e7e05333a430787a96e91ea1d0b9bfda90f1f5e96"} Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.706388 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.719660 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podStartSLOduration=3.719635791 podStartE2EDuration="3.719635791s" podCreationTimestamp="2026-01-28 18:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:48.714461483 +0000 UTC m=+399.541024304" watchObservedRunningTime="2026-01-28 18:19:48.719635791 +0000 UTC m=+399.546198612" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.730211 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" podStartSLOduration=2.84696181 podStartE2EDuration="6.730192433s" podCreationTimestamp="2026-01-28 18:19:42 +0000 UTC" firstStartedPulling="2026-01-28 18:19:43.721799397 +0000 UTC m=+394.548362218" lastFinishedPulling="2026-01-28 18:19:47.60503002 +0000 UTC m=+398.431592841" observedRunningTime="2026-01-28 18:19:48.729075421 +0000 UTC m=+399.555638262" watchObservedRunningTime="2026-01-28 18:19:48.730192433 +0000 UTC m=+399.556755264" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.735933 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:49 crc kubenswrapper[4985]: I0128 18:19:49.173871 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8"] Jan 28 18:19:49 crc kubenswrapper[4985]: I0128 18:19:49.271596 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" path="/var/lib/kubelet/pods/015deeab-c778-426c-ae5e-c5a0ab596483/volumes" Jan 28 18:19:49 crc kubenswrapper[4985]: I0128 18:19:49.705302 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" event={"ID":"81fa949b-5c24-44da-aa29-bd34bcc39d6e","Type":"ContainerStarted","Data":"5ad8c6a87ba49fd9a2dede8b5f892714a6f9410e12e2ed608e32ce98f6fc28b2"} Jan 28 18:19:51 crc kubenswrapper[4985]: I0128 18:19:51.718926 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" event={"ID":"81fa949b-5c24-44da-aa29-bd34bcc39d6e","Type":"ContainerStarted","Data":"555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47"} Jan 28 18:19:51 crc kubenswrapper[4985]: I0128 18:19:51.719700 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:51 crc kubenswrapper[4985]: I0128 18:19:51.727391 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:51 crc kubenswrapper[4985]: I0128 18:19:51.744085 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podStartSLOduration=2.317591498 podStartE2EDuration="3.744050592s" podCreationTimestamp="2026-01-28 18:19:48 +0000 UTC" firstStartedPulling="2026-01-28 18:19:49.187108702 +0000 UTC m=+400.013671543" 
lastFinishedPulling="2026-01-28 18:19:50.613567816 +0000 UTC m=+401.440130637" observedRunningTime="2026-01-28 18:19:51.737643718 +0000 UTC m=+402.564206589" watchObservedRunningTime="2026-01-28 18:19:51.744050592 +0000 UTC m=+402.570613423" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.482348 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-mxz2k"] Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.483468 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.486574 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.486682 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-r99tt" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.488604 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.499705 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.505736 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-mxz2k"] Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.606557 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mglz\" (UniqueName: \"kubernetes.io/projected/70e8a5a1-0234-4693-910c-97980980b102-kube-api-access-2mglz\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.607314 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.607527 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.607680 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/70e8a5a1-0234-4693-910c-97980980b102-metrics-client-ca\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.708991 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mglz\" (UniqueName: 
\"kubernetes.io/projected/70e8a5a1-0234-4693-910c-97980980b102-kube-api-access-2mglz\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.709039 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.709078 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.709108 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/70e8a5a1-0234-4693-910c-97980980b102-metrics-client-ca\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.710208 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/70e8a5a1-0234-4693-910c-97980980b102-metrics-client-ca\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.716390 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.726811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.732818 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mglz\" (UniqueName: \"kubernetes.io/projected/70e8a5a1-0234-4693-910c-97980980b102-kube-api-access-2mglz\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.800922 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:53 crc kubenswrapper[4985]: I0128 18:19:53.242841 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-mxz2k"] Jan 28 18:19:53 crc kubenswrapper[4985]: W0128 18:19:53.250679 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70e8a5a1_0234_4693_910c_97980980b102.slice/crio-c20820d2dfb0ea50e0ce5ca03f78d106a68cf341dca616ef74017b6e644b6a3e WatchSource:0}: Error finding container c20820d2dfb0ea50e0ce5ca03f78d106a68cf341dca616ef74017b6e644b6a3e: Status 404 returned error can't find the container with id c20820d2dfb0ea50e0ce5ca03f78d106a68cf341dca616ef74017b6e644b6a3e Jan 28 18:19:53 crc kubenswrapper[4985]: I0128 18:19:53.734515 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" event={"ID":"70e8a5a1-0234-4693-910c-97980980b102","Type":"ContainerStarted","Data":"c20820d2dfb0ea50e0ce5ca03f78d106a68cf341dca616ef74017b6e644b6a3e"} Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.302419 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-77p8r"] Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.303644 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.315741 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-77p8r"] Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343403 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-certificates\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343468 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/69277fd0-66c2-4094-87fd-eaa80e756e75-installation-pull-secrets\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343501 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-bound-sa-token\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343553 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-tls\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343670 4985 reconciler_common.go:245] 
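The W-level "Failed to process watch event ... Status 404" line above is a startup race on the cAdvisor side: the new crio-<id> cgroup appears before the runtime can report the container, so the lookup by ID returns 404. The PLEG ContainerStarted event that follows carries the same container ID (c20820d2...), so the container did come up and the warning is benign here. One way to triage such warnings mechanically, as a sketch over this log assuming one entry per line:

    import re

    # Collect container IDs cAdvisor failed to find, and IDs later reported as
    # started by PLEG; anything warned about but never started deserves a look.
    def orphaned_watch_warnings(log_lines):
        warned, started = set(), set()
        for line in log_lines:
            m = re.search(r"can't find the container with id (\w+)", line)
            if m:
                warned.add(m.group(1))
            m = re.search(r'"ContainerStarted","Data":"(\w+)"', line)
            if m:
                started.add(m.group(1))
        return warned - started

    # For this section, orphaned_watch_warnings(open("kubelet.log")) is empty.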
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/69277fd0-66c2-4094-87fd-eaa80e756e75-ca-trust-extracted\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343769 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkk9d\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-kube-api-access-qkk9d\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343995 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-trusted-ca\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.385157 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445484 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-bound-sa-token\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-tls\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445567 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/69277fd0-66c2-4094-87fd-eaa80e756e75-ca-trust-extracted\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkk9d\" (UniqueName: 
\"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-kube-api-access-qkk9d\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-trusted-ca\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445679 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-certificates\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445706 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/69277fd0-66c2-4094-87fd-eaa80e756e75-installation-pull-secrets\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.446595 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/69277fd0-66c2-4094-87fd-eaa80e756e75-ca-trust-extracted\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.447723 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-trusted-ca\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.447881 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-certificates\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.455102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-tls\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.455125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/69277fd0-66c2-4094-87fd-eaa80e756e75-installation-pull-secrets\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc 
kubenswrapper[4985]: I0128 18:19:55.465855 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-bound-sa-token\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.466843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkk9d\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-kube-api-access-qkk9d\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.618975 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.776451 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" event={"ID":"70e8a5a1-0234-4693-910c-97980980b102","Type":"ContainerStarted","Data":"4efeb3302ce3218e0f29eb596d414362b4674693cb8a67b347d35ad6f826c17e"} Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.776523 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" event={"ID":"70e8a5a1-0234-4693-910c-97980980b102","Type":"ContainerStarted","Data":"7bc4db6ba3d136cacf0c597a1bf4a228f3460fc9d84dc339cabe2a224d6c1072"} Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.806849 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" podStartSLOduration=2.089136971 podStartE2EDuration="3.806819639s" podCreationTimestamp="2026-01-28 18:19:52 +0000 UTC" firstStartedPulling="2026-01-28 18:19:53.253388729 +0000 UTC m=+404.079951550" lastFinishedPulling="2026-01-28 18:19:54.971071397 +0000 UTC m=+405.797634218" observedRunningTime="2026-01-28 18:19:55.796238796 +0000 UTC m=+406.622801637" watchObservedRunningTime="2026-01-28 18:19:55.806819639 +0000 UTC m=+406.633382460" Jan 28 18:19:56 crc kubenswrapper[4985]: I0128 18:19:56.105863 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-77p8r"] Jan 28 18:19:56 crc kubenswrapper[4985]: I0128 18:19:56.782644 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" event={"ID":"69277fd0-66c2-4094-87fd-eaa80e756e75","Type":"ContainerStarted","Data":"6bdfd07d3b55ddb6af1fcc2d993de932c84c3ee26107404883529b1bdf54dc61"} Jan 28 18:19:56 crc kubenswrapper[4985]: I0128 18:19:56.782714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" event={"ID":"69277fd0-66c2-4094-87fd-eaa80e756e75","Type":"ContainerStarted","Data":"50cdbd822fd2758d9c3fa89ee4f0f4f65a8089e10e59beb3b95396b2dc9a8a5e"} Jan 28 18:19:56 crc kubenswrapper[4985]: I0128 18:19:56.807134 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" podStartSLOduration=1.8071166440000002 podStartE2EDuration="1.807116644s" podCreationTimestamp="2026-01-28 18:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
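The image-registry sequence above shows the reconciler's usual order for a new pod: SyncLoop ADD, one VerifyControllerAttachedVolume per declared volume, MountVolume started, MountVolume.SetUp succeeded, and only then a new sandbox. A throwaway filter that extracts this lifecycle for a single pod UID (the UID below is the image-registry pod from these entries; run as `python3 trace.py < kubelet.log` against a one-entry-per-line log):

    import re
    import sys

    # Trace verify -> mount started -> SetUp succeeded for one pod UID.
    UID = "69277fd0-66c2-4094-87fd-eaa80e756e75"
    for line in sys.stdin:
        if UID not in line:
            continue
        m = re.search(r'(VerifyControllerAttachedVolume started|MountVolume started|'
                      r'MountVolume\.SetUp succeeded) for volume \\"([^\\"]+)\\"', line)
        if m:
            # token 6 of a kubenswrapper entry is the klog timestamp
            print(line.split()[6], m.group(1), m.group(2))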
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:56.806126896 +0000 UTC m=+407.632689727" watchObservedRunningTime="2026-01-28 18:19:56.807116644 +0000 UTC m=+407.633679465" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.788316 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.977725 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f"] Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.979429 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.982192 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.982546 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.989131 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.989429 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.989729 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-swjfk" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.990074 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.990180 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvs8d\" (UniqueName: \"kubernetes.io/projected/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-api-access-hvs8d\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.990238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-kube-rbac-proxy-config\") pod 
\"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.990561 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.991479 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.992925 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g869q"] Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.994305 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.997184 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-w79sb" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.997191 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.002217 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.005847 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f"] Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.009788 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-zc8rm"] Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.011653 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.013647 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g869q"] Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.015489 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.016199 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9gb4s" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.018397 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092398 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxt2j\" (UniqueName: \"kubernetes.io/projected/3d51c83d-3649-47dc-84a7-696f09f28238-kube-api-access-nxt2j\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092439 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-tls\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092459 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092557 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-root\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092721 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: E0128 18:19:58.092902 4985 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Jan 28 
18:19:58 crc kubenswrapper[4985]: E0128 18:19:58.092986 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls podName:75ed6fc2-db87-4a97-8c9f-1ff8451a9b73 nodeName:}" failed. No retries permitted until 2026-01-28 18:19:58.59296423 +0000 UTC m=+409.419527051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-lht9f" (UID: "75ed6fc2-db87-4a97-8c9f-1ff8451a9b73") : secret "kube-state-metrics-tls" not found Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.093381 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-wtmp\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.093499 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.093541 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.093647 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094490 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-textfile\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094703 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d51c83d-3649-47dc-84a7-696f09f28238-metrics-client-ca\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szsct\" (UniqueName: \"kubernetes.io/projected/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-kube-api-access-szsct\") pod 
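The two E-level entries above are the kubelet's standard path for a volume whose backing object does not exist yet: secret.go fails the SetUp because the secret is missing, and nestedpendingoperations blocks further attempts for an exponentially growing interval, starting at the 500ms printed as durationBeforeRetry. The retry appears further down in this log at 18:19:58.605 and succeeds once the secret exists. A sketch of that schedule, assuming the usual doubling; the cap used here (2m2s) is an assumption about kubelet internals, not something read from this log:

    # Exponential backoff schedule for repeated volume-mount failures.
    # initial=0.5s matches the durationBeforeRetry printed above; factor and
    # cap are assumptions for illustration.
    def backoff_schedule(failures, initial=0.5, factor=2.0, cap=122.0):
        delay = initial
        for _ in range(failures):
            yield delay                       # seconds to wait before this retry
            delay = min(delay * factor, cap)

    print(list(backoff_schedule(6)))          # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]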
\"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095326 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095463 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095624 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvs8d\" (UniqueName: \"kubernetes.io/projected/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-api-access-hvs8d\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095790 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-sys\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095997 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094756 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094750 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095675 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-volume-directive-shadow\") pod 
\"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.103781 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.118557 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvs8d\" (UniqueName: \"kubernetes.io/projected/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-api-access-hvs8d\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198157 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-textfile\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198212 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d51c83d-3649-47dc-84a7-696f09f28238-metrics-client-ca\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szsct\" (UniqueName: \"kubernetes.io/projected/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-kube-api-access-szsct\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198290 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198319 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-sys\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198347 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc 
kubenswrapper[4985]: I0128 18:19:58.198378 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt2j\" (UniqueName: \"kubernetes.io/projected/3d51c83d-3649-47dc-84a7-696f09f28238-kube-api-access-nxt2j\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198405 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198421 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-root\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198438 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-tls\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-wtmp\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.199174 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-root\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.199525 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-sys\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.199616 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-wtmp\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.199959 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-textfile\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.200650 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d51c83d-3649-47dc-84a7-696f09f28238-metrics-client-ca\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.201237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.204625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.204700 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.206164 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-tls\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.208922 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.224881 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szsct\" (UniqueName: \"kubernetes.io/projected/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-kube-api-access-szsct\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.225850 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxt2j\" (UniqueName: \"kubernetes.io/projected/3d51c83d-3649-47dc-84a7-696f09f28238-kube-api-access-nxt2j\") pod \"node-exporter-zc8rm\" 
(UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.318159 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.335705 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: W0128 18:19:58.361888 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d51c83d_3649_47dc_84a7_696f09f28238.slice/crio-acfeeeb400a349297ae14424e36b5a978881e534e9e563514bc14fc53256004f WatchSource:0}: Error finding container acfeeeb400a349297ae14424e36b5a978881e534e9e563514bc14fc53256004f: Status 404 returned error can't find the container with id acfeeeb400a349297ae14424e36b5a978881e534e9e563514bc14fc53256004f Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.605334 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.611169 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.767153 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g869q"] Jan 28 18:19:58 crc kubenswrapper[4985]: W0128 18:19:58.781295 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e14cd8d_2ff4_47bb_9b7f_ddc913b81ab7.slice/crio-8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0 WatchSource:0}: Error finding container 8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0: Status 404 returned error can't find the container with id 8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0 Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.794301 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" event={"ID":"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7","Type":"ContainerStarted","Data":"8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0"} Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.796778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zc8rm" event={"ID":"3d51c83d-3649-47dc-84a7-696f09f28238","Type":"ContainerStarted","Data":"acfeeeb400a349297ae14424e36b5a978881e534e9e563514bc14fc53256004f"} Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.908055 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.767153 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g869q"]
Jan 28 18:19:58 crc kubenswrapper[4985]: W0128 18:19:58.781295 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e14cd8d_2ff4_47bb_9b7f_ddc913b81ab7.slice/crio-8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0 WatchSource:0}: Error finding container 8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0: Status 404 returned error can't find the container with id 8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0
Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.794301 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" event={"ID":"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7","Type":"ContainerStarted","Data":"8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0"}
Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.796778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zc8rm" event={"ID":"3d51c83d-3649-47dc-84a7-696f09f28238","Type":"ContainerStarted","Data":"acfeeeb400a349297ae14424e36b5a978881e534e9e563514bc14fc53256004f"}
Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.908055 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.070527 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.073231 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.082925 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.082973 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-4gvjp"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.083177 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.083354 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.094880 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.097428 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.099929 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.102475 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.104289 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125383 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-config-volume\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-config-out\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125509 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125575 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125614 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125651 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125722 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125770 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125834 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125874 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-web-config\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125907 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8tz\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-kube-api-access-vn8tz\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125965 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.128946 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227119 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227147 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-web-config\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227204 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8tz\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-kube-api-access-vn8tz\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227232 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227290 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-config-volume\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-config-out\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227385 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227436 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.228645 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.228913 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.230357 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.234624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.235131 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-web-config\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.236469 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.243520 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-config-out\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.243774 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-config-volume\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.244203 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.244486 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.247686 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.247744 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8tz\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-kube-api-access-vn8tz\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.403207 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f"] Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.403462 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.813317 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" event={"ID":"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7","Type":"ContainerStarted","Data":"383b3b9f387929435084f59da9046b83bf2c5be1da062190b80985e07cb0f308"} Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.813799 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" event={"ID":"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7","Type":"ContainerStarted","Data":"292b7cd50df079bb29727f9c2491c9917315f95ca7bb8f2e419a217cdab4390a"} Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.816793 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" event={"ID":"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73","Type":"ContainerStarted","Data":"63db952d227ebde5b3dda0cbbb8fc7d5eb81f5b1dfbd7a919ad9e688f2e163fa"} Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.044917 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5695687f7c-8tcz2"] Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.050292 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.053202 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054108 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054230 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054243 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054625 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-sl5xz" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054820 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-64rgvnkqk08fr" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.057304 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.074380 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5695687f7c-8tcz2"] Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.121868 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141192 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " 
pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141331 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141378 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wprbf\" (UniqueName: \"kubernetes.io/projected/1a0dd00c-a59d-4e21-968c-b1a7b1198758-kube-api-access-wprbf\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141469 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141544 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-grpc-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0dd00c-a59d-4e21-968c-b1a7b1198758-metrics-client-ca\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141701 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141768 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243387 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-metrics\") pod 
\"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243494 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wprbf\" (UniqueName: \"kubernetes.io/projected/1a0dd00c-a59d-4e21-968c-b1a7b1198758-kube-api-access-wprbf\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243559 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243607 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-grpc-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243650 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0dd00c-a59d-4e21-968c-b1a7b1198758-metrics-client-ca\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243821 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.244907 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0dd00c-a59d-4e21-968c-b1a7b1198758-metrics-client-ca\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " 
pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.253013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.253013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.253434 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.253637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-grpc-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.255138 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.257942 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.265306 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wprbf\" (UniqueName: \"kubernetes.io/projected/1a0dd00c-a59d-4e21-968c-b1a7b1198758-kube-api-access-wprbf\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.371547 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.811013 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5695687f7c-8tcz2"] Jan 28 18:20:00 crc kubenswrapper[4985]: W0128 18:20:00.816029 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a0dd00c_a59d_4e21_968c_b1a7b1198758.slice/crio-f9d8a5055415c952ed46b7ea6b05f1a426365e5422135a91f4a09bcd53f7cc92 WatchSource:0}: Error finding container f9d8a5055415c952ed46b7ea6b05f1a426365e5422135a91f4a09bcd53f7cc92: Status 404 returned error can't find the container with id f9d8a5055415c952ed46b7ea6b05f1a426365e5422135a91f4a09bcd53f7cc92 Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.826128 4985 generic.go:334] "Generic (PLEG): container finished" podID="3d51c83d-3649-47dc-84a7-696f09f28238" containerID="28a2d278450a2c0cc5e014ee9a8495af198fadbb119e92489df408ebfcc21209" exitCode=0 Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.826210 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zc8rm" event={"ID":"3d51c83d-3649-47dc-84a7-696f09f28238","Type":"ContainerDied","Data":"28a2d278450a2c0cc5e014ee9a8495af198fadbb119e92489df408ebfcc21209"} Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.833424 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"e4c6aa85ce23ef513dc4565b8a30dc7b0b0cf648cc0b85ecf552de24b6f2e9aa"} Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.839327 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"f9d8a5055415c952ed46b7ea6b05f1a426365e5422135a91f4a09bcd53f7cc92"} Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.726337 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.727679 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.738682 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788086 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788138 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788167 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788186 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788353 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788588 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.890862 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 
crc kubenswrapper[4985]: I0128 18:20:02.890975 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891016 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891041 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891089 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.892485 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.892497 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.892951 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.893407 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.898304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.898475 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.910188 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.051167 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.346945 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6845d579bb-9lznf"] Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.348584 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.351055 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-5vgqq" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.351152 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.351162 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-1vakj0kiaupde" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.351884 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.352624 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.355359 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.377089 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6845d579bb-9lznf"] Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.396587 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.396662 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2kfm\" (UniqueName: \"kubernetes.io/projected/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-kube-api-access-w2kfm\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.396789 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-audit-log\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.396934 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-client-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.397009 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-server-tls\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 
18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.397035 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-client-certs\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.397067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-metrics-server-audit-profiles\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498506 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-server-tls\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498564 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-client-certs\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-metrics-server-audit-profiles\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498656 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498711 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2kfm\" (UniqueName: \"kubernetes.io/projected/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-kube-api-access-w2kfm\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498741 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-audit-log\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498787 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-client-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.499671 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-audit-log\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.500221 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.500304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-metrics-server-audit-profiles\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.503399 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-client-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.507893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-server-tls\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.508636 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-client-certs\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.519536 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2kfm\" (UniqueName: \"kubernetes.io/projected/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-kube-api-access-w2kfm\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.765004 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.764560 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl"] Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.775512 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.780772 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.782172 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.792272 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl"] Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.986595 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54abc3c0-c9d2-49a3-bc29-854369637b99-monitoring-plugin-cert\") pod \"monitoring-plugin-868c9846bf-6bwkl\" (UID: \"54abc3c0-c9d2-49a3-bc29-854369637b99\") " pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.088571 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54abc3c0-c9d2-49a3-bc29-854369637b99-monitoring-plugin-cert\") pod \"monitoring-plugin-868c9846bf-6bwkl\" (UID: \"54abc3c0-c9d2-49a3-bc29-854369637b99\") " pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.094414 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54abc3c0-c9d2-49a3-bc29-854369637b99-monitoring-plugin-cert\") pod \"monitoring-plugin-868c9846bf-6bwkl\" (UID: \"54abc3c0-c9d2-49a3-bc29-854369637b99\") " pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.104750 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.429241 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.431868 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.438657 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.438937 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-psvl8" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.439150 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.439335 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.439493 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443631 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443692 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443785 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-dji3dhnh09eo0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443812 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443808 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.444317 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.444345 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.446452 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.461823 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598388 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598446 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config-out\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598473 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598495 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598539 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598580 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598594 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99stz\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-kube-api-access-99stz\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598637 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598658 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598675 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-web-config\") pod \"prometheus-k8s-0\" (UID: 
\"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598697 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598720 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598770 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598787 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598803 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598823 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.617936 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6845d579bb-9lznf"] Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700494 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700548 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700572 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-web-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700630 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700653 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700689 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700756 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700803 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700841 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700874 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config-out\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700900 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700927 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.701008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99stz\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-kube-api-access-99stz\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.701029 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.702054 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.702158 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.702290 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.702474 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.703996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.712953 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.712965 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.712970 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-web-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.713203 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.714227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config-out\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc 
kubenswrapper[4985]: I0128 18:20:04.714625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.714637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.715053 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.715996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.716627 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.717715 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.720151 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.721801 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.721938 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99stz\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-kube-api-access-99stz\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: W0128 18:20:04.729730 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6ceb598_f81e_4169_acfd_ab2c8c776842.slice/crio-bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd WatchSource:0}: Error finding 
Jan 28 18:20:04 crc kubenswrapper[4985]: W0128 18:20:04.729730 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6ceb598_f81e_4169_acfd_ab2c8c776842.slice/crio-bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd WatchSource:0}: Error finding container bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd: Status 404 returned error can't find the container with id bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd
Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.738413 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl"]
Jan 28 18:20:04 crc kubenswrapper[4985]: W0128 18:20:04.742119 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54abc3c0_c9d2_49a3_bc29_854369637b99.slice/crio-99c213063f5c52fcb703b79b41ef61f758fee113aa443d5773092a141a5f7243 WatchSource:0}: Error finding container 99c213063f5c52fcb703b79b41ef61f758fee113aa443d5773092a141a5f7243: Status 404 returned error can't find the container with id 99c213063f5c52fcb703b79b41ef61f758fee113aa443d5773092a141a5f7243
Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.771347 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.874021 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" event={"ID":"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde","Type":"ContainerStarted","Data":"bcade0b67e184262ccbde20e5f5bf5c5baf7b03fe84ea271ec5e44a43d3ba1cc"}
Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.877281 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zc8rm" event={"ID":"3d51c83d-3649-47dc-84a7-696f09f28238","Type":"ContainerStarted","Data":"d08ad77c9136e37a1d4202bf2af12cc700af44e341ccbbe505825f2cc0c51b8b"}
Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.879503 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67787765c4-69gqs" event={"ID":"c6ceb598-f81e-4169-acfd-ab2c8c776842","Type":"ContainerStarted","Data":"bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd"}
Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.881735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" event={"ID":"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7","Type":"ContainerStarted","Data":"0c418790bdc3f1cab88023dea9fbbb624dc63764dda6954f145d0f9ccbb7443f"}
Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.883547 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" event={"ID":"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73","Type":"ContainerStarted","Data":"e8ebc1be9c061cfa9d730422c9bdec2125f6bf48a63b8b299144374ad79adbc4"}
Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.884610 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" event={"ID":"54abc3c0-c9d2-49a3-bc29-854369637b99","Type":"ContainerStarted","Data":"99c213063f5c52fcb703b79b41ef61f758fee113aa443d5773092a141a5f7243"}
Jan 28 18:20:08 crc kubenswrapper[4985]: I0128 18:20:05.896642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67787765c4-69gqs" event={"ID":"c6ceb598-f81e-4169-acfd-ab2c8c776842","Type":"ContainerStarted","Data":"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f"}
Jan 28 18:20:08 crc kubenswrapper[4985]: I0128 18:20:05.951167 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" podStartSLOduration=4.074891625 podStartE2EDuration="8.951133576s" podCreationTimestamp="2026-01-28 18:19:57 +0000 UTC" firstStartedPulling="2026-01-28 18:19:59.211984627 +0000 UTC m=+410.038547448" lastFinishedPulling="2026-01-28 18:20:04.088226578 +0000 UTC m=+414.914789399" observedRunningTime="2026-01-28 18:20:05.924720889 +0000 UTC m=+416.751283710" watchObservedRunningTime="2026-01-28 18:20:05.951133576 +0000 UTC m=+416.777696417"
Jan 28 18:20:08 crc kubenswrapper[4985]: I0128 18:20:05.959057 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67787765c4-69gqs" podStartSLOduration=3.959029782 podStartE2EDuration="3.959029782s" podCreationTimestamp="2026-01-28 18:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:20:05.944320111 +0000 UTC m=+416.770882952" watchObservedRunningTime="2026-01-28 18:20:05.959029782 +0000 UTC m=+416.785592613"
Jan 28 18:20:08 crc kubenswrapper[4985]: I0128 18:20:08.966930 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Jan 28 18:20:10 crc kubenswrapper[4985]: I0128 18:20:10.934534 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"95d3dcc3dc6724c73db9e012ed32d1a45c090e852a22b2a26b9416bc53219423"}
Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.186069 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.186151 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.186216 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.187050 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.187106 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf" gracePeriod=600
Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.943904 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf" exitCode=0
Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.944031 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf"}
Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.944435 4985 scope.go:117] "RemoveContainer" containerID="7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa"
Jan 28 18:20:12 crc kubenswrapper[4985]: I0128 18:20:12.951718 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"a378f884ff1c0ba91e84019919ea9054d6ce5924384bb989e907966b0505fbd9"}
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.051440 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67787765c4-69gqs"
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.051613 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67787765c4-69gqs"
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.059748 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67787765c4-69gqs"
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.964322 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" event={"ID":"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73","Type":"ContainerStarted","Data":"0f314032d2d0dad58816b68834f071702110a56bcb3a6cd46dee7b72233c9a13"}
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.969933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zc8rm" event={"ID":"3d51c83d-3649-47dc-84a7-696f09f28238","Type":"ContainerStarted","Data":"7ad94a5888c7abd7e46fcc1e071bb17e06c0684ed49fb3889ddb377fe42df8bc"}
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.975153 4985 generic.go:334] "Generic (PLEG): container finished" podID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerID="adbac7ee6898806b48324e26df1522d5acab80a3215e82dff7f7129f07c05432" exitCode=0
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.975295 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerDied","Data":"adbac7ee6898806b48324e26df1522d5acab80a3215e82dff7f7129f07c05432"}
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.979529 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e"}
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.983043 4985 generic.go:334] "Generic (PLEG): container finished" podID="1321027d-6616-4539-9eef-555f2ef23ecb" containerID="8c304a35e184693ad32049c64ace225c07a9f0acca7de0da90d9e220f5938dc4" exitCode=0
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.983221 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerDied","Data":"8c304a35e184693ad32049c64ace225c07a9f0acca7de0da90d9e220f5938dc4"}
Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.989068 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67787765c4-69gqs"
Jan 28 18:20:14 crc kubenswrapper[4985]: I0128 18:20:13.999819 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-zc8rm" podStartSLOduration=15.637242695 podStartE2EDuration="16.999791698s" podCreationTimestamp="2026-01-28 18:19:57 +0000 UTC" firstStartedPulling="2026-01-28 18:19:58.365013314 +0000 UTC m=+409.191576135" lastFinishedPulling="2026-01-28 18:19:59.727562317 +0000 UTC m=+410.554125138" observedRunningTime="2026-01-28 18:20:13.994100225 +0000 UTC m=+424.820663066" watchObservedRunningTime="2026-01-28 18:20:13.999791698 +0000 UTC m=+424.826354519"
Jan 28 18:20:14 crc kubenswrapper[4985]: I0128 18:20:14.145163 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"]
Jan 28 18:20:15 crc kubenswrapper[4985]: I0128 18:20:15.633515 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r"
Jan 28 18:20:15 crc kubenswrapper[4985]: I0128 18:20:15.740743 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"]
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.614635 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5whpv"]
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.617972 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.621990 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.630503 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5whpv"]
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.729279 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-catalog-content\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.729380 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-utilities\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.729635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqblq\" (UniqueName: \"kubernetes.io/projected/5cad9e98-172d-4053-83a3-ebee724a6d9c-kube-api-access-sqblq\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.813759 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mclkd"]
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.815213 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mclkd"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.818584 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"]
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.818755 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.831280 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-utilities\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.831390 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqblq\" (UniqueName: \"kubernetes.io/projected/5cad9e98-172d-4053-83a3-ebee724a6d9c-kube-api-access-sqblq\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.831593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-catalog-content\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.831625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-utilities\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.832496 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-catalog-content\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.866277 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqblq\" (UniqueName: \"kubernetes.io/projected/5cad9e98-172d-4053-83a3-ebee724a6d9c-kube-api-access-sqblq\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv"
Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.933617 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd"
" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.933770 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.952142 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.034982 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.035057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.035123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.035911 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.036121 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.053986 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.132649 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.673411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5whpv"] Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.706238 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"] Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.024875 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" event={"ID":"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73","Type":"ContainerStarted","Data":"184262c62d244fdfdd37aba42ec0320e853bbdc7b80e58a05161bff9dda86f7a"} Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.028714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" event={"ID":"54abc3c0-c9d2-49a3-bc29-854369637b99","Type":"ContainerStarted","Data":"93ac1d0cc7c88b5c3c834f75aa3e35ddcd99bc494ac09081e5c790cf3de54755"} Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.028942 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.032481 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"dc45b7824da10c6dc1f43a74348d32505c5f1fb53beb023d3d1f41d1deefa38f"} Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.039348 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.048025 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podStartSLOduration=2.6884907 podStartE2EDuration="15.048011017s" podCreationTimestamp="2026-01-28 18:20:03 +0000 UTC" firstStartedPulling="2026-01-28 18:20:04.751456798 +0000 UTC m=+415.578019609" lastFinishedPulling="2026-01-28 18:20:17.110977105 +0000 UTC m=+427.937539926" observedRunningTime="2026-01-28 18:20:18.044400774 +0000 UTC m=+428.870963595" watchObservedRunningTime="2026-01-28 18:20:18.048011017 +0000 UTC m=+428.874573838" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.004690 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z2xq5"] Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.015845 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2xq5"] Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.016049 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.022746 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.047443 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerStarted","Data":"7551215f48c6a8439a1b9b8e99500ee1a2e82e6cca161bb1872b67e7ca8260b3"} Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.052272 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerStarted","Data":"9065c3cedcf2c522ec02096a476095855bf69695fefcb13d3535bb45ef54bf89"} Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.171801 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v7m8\" (UniqueName: \"kubernetes.io/projected/d59677ee-1cc3-4635-a126-0383e56d3fc0-kube-api-access-9v7m8\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.171933 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-catalog-content\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.171990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-utilities\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.193351 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" podStartSLOduration=17.505167224 podStartE2EDuration="22.193325607s" podCreationTimestamp="2026-01-28 18:19:57 +0000 UTC" firstStartedPulling="2026-01-28 18:19:59.411301127 +0000 UTC m=+410.237863948" lastFinishedPulling="2026-01-28 18:20:04.09945951 +0000 UTC m=+414.926022331" observedRunningTime="2026-01-28 18:20:19.077620763 +0000 UTC m=+429.904183584" watchObservedRunningTime="2026-01-28 18:20:19.193325607 +0000 UTC m=+430.019888428" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.196410 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4fx27"] Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.197786 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.202980 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.213044 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fx27"] Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273172 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-catalog-content\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273220 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-catalog-content\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273333 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-utilities\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273417 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v7m8\" (UniqueName: \"kubernetes.io/projected/d59677ee-1cc3-4635-a126-0383e56d3fc0-kube-api-access-9v7m8\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273440 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb6bm\" (UniqueName: \"kubernetes.io/projected/478fc51e-7963-4ba3-a5ec-c2b7045b8353-kube-api-access-wb6bm\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273463 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-utilities\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273976 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-catalog-content\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273985 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-utilities\") pod \"community-operators-z2xq5\" (UID: 
\"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.294896 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v7m8\" (UniqueName: \"kubernetes.io/projected/d59677ee-1cc3-4635-a126-0383e56d3fc0-kube-api-access-9v7m8\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.345468 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.375643 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb6bm\" (UniqueName: \"kubernetes.io/projected/478fc51e-7963-4ba3-a5ec-c2b7045b8353-kube-api-access-wb6bm\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.375713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-utilities\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.375783 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-catalog-content\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.376392 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-catalog-content\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.376503 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-utilities\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.404471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb6bm\" (UniqueName: \"kubernetes.io/projected/478fc51e-7963-4ba3-a5ec-c2b7045b8353-kube-api-access-wb6bm\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.528860 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.808552 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2xq5"] Jan 28 18:20:19 crc kubenswrapper[4985]: W0128 18:20:19.818167 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd59677ee_1cc3_4635_a126_0383e56d3fc0.slice/crio-8683c645dd98948d2659c44693aae32885ef9dce31f0ab822a262cfa7cafa553 WatchSource:0}: Error finding container 8683c645dd98948d2659c44693aae32885ef9dce31f0ab822a262cfa7cafa553: Status 404 returned error can't find the container with id 8683c645dd98948d2659c44693aae32885ef9dce31f0ab822a262cfa7cafa553 Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.001249 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fx27"] Jan 28 18:20:20 crc kubenswrapper[4985]: W0128 18:20:20.010718 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod478fc51e_7963_4ba3_a5ec_c2b7045b8353.slice/crio-31abe548a91dfa3cf866bfa3e678a15fcc46733ebcd5a0f38cecf26186d89b19 WatchSource:0}: Error finding container 31abe548a91dfa3cf866bfa3e678a15fcc46733ebcd5a0f38cecf26186d89b19: Status 404 returned error can't find the container with id 31abe548a91dfa3cf866bfa3e678a15fcc46733ebcd5a0f38cecf26186d89b19 Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.061602 4985 generic.go:334] "Generic (PLEG): container finished" podID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerID="9fb725b7927bf308d0c769e88cf67812255b9577d22dfa62ad7023f08bc0245b" exitCode=0 Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.061707 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerDied","Data":"9fb725b7927bf308d0c769e88cf67812255b9577d22dfa62ad7023f08bc0245b"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.075175 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" event={"ID":"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde","Type":"ContainerStarted","Data":"7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.088505 4985 generic.go:334] "Generic (PLEG): container finished" podID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerID="14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da" exitCode=0 Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.088618 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerDied","Data":"14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.097796 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerStarted","Data":"8683c645dd98948d2659c44693aae32885ef9dce31f0ab822a262cfa7cafa553"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.100971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" 
event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerStarted","Data":"31abe548a91dfa3cf866bfa3e678a15fcc46733ebcd5a0f38cecf26186d89b19"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.108370 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"449c9e01d828adf7beba9fe6a01be63b42c205583713f4a65937700457da64d2"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.123858 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podStartSLOduration=2.970504671 podStartE2EDuration="17.123830954s" podCreationTimestamp="2026-01-28 18:20:03 +0000 UTC" firstStartedPulling="2026-01-28 18:20:04.635096035 +0000 UTC m=+415.461658856" lastFinishedPulling="2026-01-28 18:20:18.788422318 +0000 UTC m=+429.614985139" observedRunningTime="2026-01-28 18:20:20.117980866 +0000 UTC m=+430.944543707" watchObservedRunningTime="2026-01-28 18:20:20.123830954 +0000 UTC m=+430.950393805" Jan 28 18:20:21 crc kubenswrapper[4985]: I0128 18:20:21.119065 4985 generic.go:334] "Generic (PLEG): container finished" podID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerID="4c22c62c46381126d354905932ce4d5fa34a0b3162f09f4ea38da18f6853bedc" exitCode=0 Jan 28 18:20:21 crc kubenswrapper[4985]: I0128 18:20:21.119297 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerDied","Data":"4c22c62c46381126d354905932ce4d5fa34a0b3162f09f4ea38da18f6853bedc"} Jan 28 18:20:21 crc kubenswrapper[4985]: I0128 18:20:21.124950 4985 generic.go:334] "Generic (PLEG): container finished" podID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerID="823e2b1b71b59f463d5bbf67578899e292949931e58a5f6ad2ef4edbe6d5b960" exitCode=0 Jan 28 18:20:21 crc kubenswrapper[4985]: I0128 18:20:21.125113 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerDied","Data":"823e2b1b71b59f463d5bbf67578899e292949931e58a5f6ad2ef4edbe6d5b960"} Jan 28 18:20:23 crc kubenswrapper[4985]: I0128 18:20:23.767560 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:23 crc kubenswrapper[4985]: I0128 18:20:23.768761 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.198035 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerStarted","Data":"e82e10f5d58ff6df3e265f1309f4b647f09e3bff2517a3cfe802376ea4837d61"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.202203 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerStarted","Data":"13c932ede5b3e566b7752d12093b1dd4c26483b9039f367f6e4ba1e8e603bf3f"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.204526 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"c79998fee84ab3dc59da5883adce38f31b241d4a95cdb40df3cc765408d1dd9d"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.206266 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"90e079a4446c8b474c23d1d3b8fbedc0b9494e5d17b446ba41ad9106fe2c5b92"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.208012 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerStarted","Data":"7cd224f3704fa894f5a8615b761322d145f0dd17fe13bc47dafdab9320f11378"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.210303 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerStarted","Data":"db9a004e1c5a7dc3f2ee0e744da5f06fe090c8dd6d3fbb3e47a00b888ddbf7d7"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.224204 4985 generic.go:334] "Generic (PLEG): container finished" podID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerID="13c932ede5b3e566b7752d12093b1dd4c26483b9039f367f6e4ba1e8e603bf3f" exitCode=0 Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.224338 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerDied","Data":"13c932ede5b3e566b7752d12093b1dd4c26483b9039f367f6e4ba1e8e603bf3f"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.228468 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"fb8a9c2304bf6f66244b478879235230db7c610d570dea6d124039c7522384b6"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.240506 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"1b877fa7d8957f795b1e4d757b81af0710f69a9bba74b471a2e41dc109f1813c"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.263618 4985 generic.go:334] "Generic (PLEG): container finished" podID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerID="7cd224f3704fa894f5a8615b761322d145f0dd17fe13bc47dafdab9320f11378" exitCode=0 Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.263628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerDied","Data":"7cd224f3704fa894f5a8615b761322d145f0dd17fe13bc47dafdab9320f11378"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.274016 4985 generic.go:334] "Generic (PLEG): container finished" podID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerID="db9a004e1c5a7dc3f2ee0e744da5f06fe090c8dd6d3fbb3e47a00b888ddbf7d7" exitCode=0 Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.274152 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerDied","Data":"db9a004e1c5a7dc3f2ee0e744da5f06fe090c8dd6d3fbb3e47a00b888ddbf7d7"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.290209 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"d88cf53b73bae3057faba92c63ccca730cfe5c01f975c73ab0f89f9a55588049"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.298787 4985 generic.go:334] "Generic (PLEG): container finished" podID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerID="e82e10f5d58ff6df3e265f1309f4b647f09e3bff2517a3cfe802376ea4837d61" exitCode=0 Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.298909 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerDied","Data":"e82e10f5d58ff6df3e265f1309f4b647f09e3bff2517a3cfe802376ea4837d61"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.310020 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerStarted","Data":"f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.313955 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"e783f67f621c68c3a3e9b3123918004c596e8616a65a72e419519d463b8235a6"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.313999 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"cb50d6901d948ecde4675484c755ac429cbcfbe3f5906639d0d21e77b9bcc6c4"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.315482 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.318428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerStarted","Data":"d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.323922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"b4f332385a51a29e5b49b67fee7d25671a1611c41938c82f993c1577b5fb006c"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.323971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"29c6df97dc0932f2f4a72f8b1540034f084814f47a2b3b915df7e42676f72b43"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.323985 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"ccade00460d333725457a17c55a6a611b5d19a2d263e54b666b27cc9d7fec666"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.324000 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"12109e23795aa940c009ff928ffb111e8f0605a1b584c2c9d3d93feb16fcd92d"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.329841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"027202c651a9e5c3d0d918f93c0f13bd734f866786ea48de1f14d34578d0424c"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.329937 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"454dd8faa4ad50b9d7238141ecc2c0f2932b318ee28de2fa0a07bf848bd5a5d6"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.329950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"7c433791c80e7ad566bd2f670ea34379fe6553a42437dcce4fef30a3ef587d2a"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.329960 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"17f03207c8b6d6941e2ab683982f017305e84e97bed651b24e9a28c3b1353d98"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.332722 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerStarted","Data":"acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.334827 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.335427 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4fx27" podStartSLOduration=3.010725417 podStartE2EDuration="12.335415994s" podCreationTimestamp="2026-01-28 18:20:19 +0000 UTC" firstStartedPulling="2026-01-28 18:20:21.364750472 +0000 UTC m=+432.191313293" lastFinishedPulling="2026-01-28 18:20:30.689441039 +0000 UTC m=+441.516003870" observedRunningTime="2026-01-28 18:20:31.329607468 +0000 UTC m=+442.156170309" watchObservedRunningTime="2026-01-28 18:20:31.335415994 +0000 UTC m=+442.161978815" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.358889 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podStartSLOduration=2.298144067 podStartE2EDuration="31.358857226s" podCreationTimestamp="2026-01-28 18:20:00 +0000 UTC" firstStartedPulling="2026-01-28 18:20:00.82308715 +0000 UTC m=+411.649649971" lastFinishedPulling="2026-01-28 18:20:29.883800309 +0000 UTC m=+440.710363130" observedRunningTime="2026-01-28 18:20:31.353899964 +0000 UTC m=+442.180462775" watchObservedRunningTime="2026-01-28 18:20:31.358857226 +0000 UTC m=+442.185420057" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.381004 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mclkd" podStartSLOduration=4.726471298 podStartE2EDuration="15.380970869s" podCreationTimestamp="2026-01-28 18:20:16 +0000 UTC" firstStartedPulling="2026-01-28 
18:20:20.093554527 +0000 UTC m=+430.920117348" lastFinishedPulling="2026-01-28 18:20:30.748054088 +0000 UTC m=+441.574616919" observedRunningTime="2026-01-28 18:20:31.373328 +0000 UTC m=+442.199890841" watchObservedRunningTime="2026-01-28 18:20:31.380970869 +0000 UTC m=+442.207533710" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.425158 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=14.303916839 podStartE2EDuration="27.425133954s" podCreationTimestamp="2026-01-28 18:20:04 +0000 UTC" firstStartedPulling="2026-01-28 18:20:13.977962383 +0000 UTC m=+424.804525224" lastFinishedPulling="2026-01-28 18:20:27.099179518 +0000 UTC m=+437.925742339" observedRunningTime="2026-01-28 18:20:31.419095101 +0000 UTC m=+442.245657942" watchObservedRunningTime="2026-01-28 18:20:31.425133954 +0000 UTC m=+442.251696775" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.447174 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z2xq5" podStartSLOduration=3.993549186 podStartE2EDuration="13.447152875s" podCreationTimestamp="2026-01-28 18:20:18 +0000 UTC" firstStartedPulling="2026-01-28 18:20:21.361934922 +0000 UTC m=+432.188497743" lastFinishedPulling="2026-01-28 18:20:30.815538611 +0000 UTC m=+441.642101432" observedRunningTime="2026-01-28 18:20:31.44138589 +0000 UTC m=+442.267948721" watchObservedRunningTime="2026-01-28 18:20:31.447152875 +0000 UTC m=+442.273715696" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.472809 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=6.973572467 podStartE2EDuration="32.472785059s" podCreationTimestamp="2026-01-28 18:19:59 +0000 UTC" firstStartedPulling="2026-01-28 18:20:00.133211577 +0000 UTC m=+410.959774398" lastFinishedPulling="2026-01-28 18:20:25.632424179 +0000 UTC m=+436.458986990" observedRunningTime="2026-01-28 18:20:31.468490046 +0000 UTC m=+442.295052867" watchObservedRunningTime="2026-01-28 18:20:31.472785059 +0000 UTC m=+442.299347870" Jan 28 18:20:32 crc kubenswrapper[4985]: E0128 18:20:32.590837 4985 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: configmap "prometheus-k8s-rulefiles-0" not found Jan 28 18:20:32 crc kubenswrapper[4985]: E0128 18:20:32.591012 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0 podName:44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9 nodeName:}" failed. No retries permitted until 2026-01-28 18:20:33.090985573 +0000 UTC m=+443.917548604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9") : configmap "prometheus-k8s-rulefiles-0" not found Jan 28 18:20:34 crc kubenswrapper[4985]: I0128 18:20:34.772821 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:36 crc kubenswrapper[4985]: I0128 18:20:36.441440 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerStarted","Data":"2a4fec7ddb6f9b88bf6eb9d3cb66a2ad0edb77691fda84f03aa283e5cf269853"} Jan 28 18:20:36 crc kubenswrapper[4985]: I0128 18:20:36.486211 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5whpv" podStartSLOduration=4.47659808 podStartE2EDuration="20.486184181s" podCreationTimestamp="2026-01-28 18:20:16 +0000 UTC" firstStartedPulling="2026-01-28 18:20:20.065870954 +0000 UTC m=+430.892433775" lastFinishedPulling="2026-01-28 18:20:36.075457055 +0000 UTC m=+446.902019876" observedRunningTime="2026-01-28 18:20:36.484929995 +0000 UTC m=+447.311492816" watchObservedRunningTime="2026-01-28 18:20:36.486184181 +0000 UTC m=+447.312747022" Jan 28 18:20:36 crc kubenswrapper[4985]: I0128 18:20:36.953472 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:36 crc kubenswrapper[4985]: I0128 18:20:36.953814 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.133799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.133853 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.195014 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.515689 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.998519 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5whpv" podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output=< Jan 28 18:20:37 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:20:37 crc kubenswrapper[4985]: > Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.195778 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-b5t5k" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" containerID="cri-o://943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887" gracePeriod=15 Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.217758 4985 patch_prober.go:28] interesting pod/console-f9d7485db-b5t5k container/console namespace/openshift-console: Readiness probe status=failure output="Get 
\"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.218320 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-b5t5k" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.346194 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.346294 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.431196 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.510028 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.530005 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.530095 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.574164 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.477390 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b5t5k_c7f9c411-3899-4824-a051-b18ad42a950e/console/0.log" Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.477468 4985 generic.go:334] "Generic (PLEG): container finished" podID="c7f9c411-3899-4824-a051-b18ad42a950e" containerID="943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887" exitCode=2 Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.477596 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b5t5k" event={"ID":"c7f9c411-3899-4824-a051-b18ad42a950e","Type":"ContainerDied","Data":"943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887"} Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.521959 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.790801 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerName="registry" containerID="cri-o://2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829" gracePeriod=30 Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.657372 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b5t5k_c7f9c411-3899-4824-a051-b18ad42a950e/console/0.log" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.657871 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855006 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855133 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855286 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855349 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855378 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855437 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.856527 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca" (OuterVolumeSpecName: "service-ca") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.856520 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.856625 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.856680 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config" (OuterVolumeSpecName: "console-config") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.862900 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.864165 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.865484 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv" (OuterVolumeSpecName: "kube-api-access-2dbkv") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "kube-api-access-2dbkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957604 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957653 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957666 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957678 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957690 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957699 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957711 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.495215 4985 generic.go:334] "Generic (PLEG): container finished" podID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerID="2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829" exitCode=0 Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.495299 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" event={"ID":"23852c5a-64eb-4a56-8fbb-2e91b16a8429","Type":"ContainerDied","Data":"2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829"} Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.497892 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b5t5k_c7f9c411-3899-4824-a051-b18ad42a950e/console/0.log" Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.497964 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b5t5k" event={"ID":"c7f9c411-3899-4824-a051-b18ad42a950e","Type":"ContainerDied","Data":"0c4fa24c07af4cdb6a65715225f501e2d489d532f902d5a36a0225bc9b457962"} Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.498012 4985 scope.go:117] "RemoveContainer" containerID="943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887" Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.498037 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.537501 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.547146 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:20:43 crc kubenswrapper[4985]: I0128 18:20:43.279680 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" path="/var/lib/kubelet/pods/c7f9c411-3899-4824-a051-b18ad42a950e/volumes" Jan 28 18:20:43 crc kubenswrapper[4985]: I0128 18:20:43.774639 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:43 crc kubenswrapper[4985]: I0128 18:20:43.781771 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:43 crc kubenswrapper[4985]: I0128 18:20:43.958002 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.093756 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094035 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094093 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094130 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094154 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094213 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094328 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.095106 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.095539 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.099809 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.100163 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.100392 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl" (OuterVolumeSpecName: "kube-api-access-ppzfl") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "kube-api-access-ppzfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.103873 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.111946 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.117932 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195846 4985 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195894 4985 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195904 4985 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195916 4985 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195926 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195934 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195945 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.516003 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" event={"ID":"23852c5a-64eb-4a56-8fbb-2e91b16a8429","Type":"ContainerDied","Data":"718f56cadfa73ec9c883cb72f3a4ad761b62779dbd38dd0559a00a1f1b0a3abc"} Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.516440 4985 scope.go:117] "RemoveContainer" containerID="2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.516047 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.554043 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.559119 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:20:45 crc kubenswrapper[4985]: I0128 18:20:45.272411 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" path="/var/lib/kubelet/pods/23852c5a-64eb-4a56-8fbb-2e91b16a8429/volumes" Jan 28 18:20:47 crc kubenswrapper[4985]: I0128 18:20:47.025663 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:47 crc kubenswrapper[4985]: I0128 18:20:47.103184 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:21:04 crc kubenswrapper[4985]: I0128 18:21:04.772779 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:21:04 crc kubenswrapper[4985]: I0128 18:21:04.831382 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:21:05 crc kubenswrapper[4985]: I0128 18:21:05.732736 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.334986 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:21:29 crc kubenswrapper[4985]: E0128 18:21:29.336019 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336035 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" Jan 28 18:21:29 crc kubenswrapper[4985]: E0128 18:21:29.336065 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerName="registry" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336072 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerName="registry" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336209 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336220 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerName="registry" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336837 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.371605 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491360 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491767 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491909 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491942 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491979 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.492005 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc 
kubenswrapper[4985]: I0128 18:21:29.593368 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593394 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593462 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593493 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593516 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.595618 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.595643 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.595668 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.595736 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.607135 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.607136 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.614785 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.660496 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.892080 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:21:30 crc kubenswrapper[4985]: I0128 18:21:30.886129 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cd8f6d96f-p5cf4" event={"ID":"a056a5e7-3897-4712-960c-e0211c7b3062","Type":"ContainerStarted","Data":"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55"} Jan 28 18:21:30 crc kubenswrapper[4985]: I0128 18:21:30.888076 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cd8f6d96f-p5cf4" event={"ID":"a056a5e7-3897-4712-960c-e0211c7b3062","Type":"ContainerStarted","Data":"6757ef85c9af6b8087e2bbaecccf725d4d9f1d7a4e12622260f4ddbd98525b61"} Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.660771 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.662556 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.670675 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.698591 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-cd8f6d96f-p5cf4" podStartSLOduration=10.698565162 podStartE2EDuration="10.698565162s" podCreationTimestamp="2026-01-28 18:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:21:30.920772599 +0000 UTC m=+501.747335450" watchObservedRunningTime="2026-01-28 
18:21:39.698565162 +0000 UTC m=+510.525128013" Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.969949 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:40 crc kubenswrapper[4985]: I0128 18:21:40.057952 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.099584 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-67787765c4-69gqs" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerName="console" containerID="cri-o://8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" gracePeriod=15 Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.541031 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67787765c4-69gqs_c6ceb598-f81e-4169-acfd-ab2c8c776842/console/0.log" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.541642 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731474 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731539 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731563 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731626 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731687 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731772 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731837 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.732934 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca" (OuterVolumeSpecName: "service-ca") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.733117 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config" (OuterVolumeSpecName: "console-config") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.733235 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.733446 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.738888 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.739679 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.741576 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm" (OuterVolumeSpecName: "kube-api-access-d4fqm") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "kube-api-access-d4fqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834582 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834789 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834808 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834827 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834847 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834864 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834880 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.167805 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67787765c4-69gqs_c6ceb598-f81e-4169-acfd-ab2c8c776842/console/0.log" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.167961 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerID="8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" exitCode=2 Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.168044 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.168057 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67787765c4-69gqs" event={"ID":"c6ceb598-f81e-4169-acfd-ab2c8c776842","Type":"ContainerDied","Data":"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f"} Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.168150 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67787765c4-69gqs" event={"ID":"c6ceb598-f81e-4169-acfd-ab2c8c776842","Type":"ContainerDied","Data":"bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd"} Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.168223 4985 scope.go:117] "RemoveContainer" containerID="8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.194705 4985 scope.go:117] "RemoveContainer" containerID="8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" Jan 28 18:22:06 crc kubenswrapper[4985]: E0128 18:22:06.195580 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f\": container with ID starting with 8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f not found: ID does not exist" containerID="8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.195669 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f"} err="failed to get container status \"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f\": rpc error: code = NotFound desc = could not find container \"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f\": container with ID starting with 8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f not found: ID does not exist" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.202988 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.207141 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:22:07 crc kubenswrapper[4985]: I0128 18:22:07.275769 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" path="/var/lib/kubelet/pods/c6ceb598-f81e-4169-acfd-ab2c8c776842/volumes" Jan 28 18:22:41 crc kubenswrapper[4985]: I0128 18:22:41.186224 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:22:41 crc kubenswrapper[4985]: I0128 18:22:41.187360 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:23:11 crc kubenswrapper[4985]: I0128 18:23:11.185735 4985 patch_prober.go:28] interesting 
pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:23:11 crc kubenswrapper[4985]: I0128 18:23:11.186550 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.186697 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.187749 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.187843 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.189028 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.189166 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e" gracePeriod=600 Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.898388 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e" exitCode=0 Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.898439 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e"} Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.898485 4985 scope.go:117] "RemoveContainer" containerID="593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf" Jan 28 18:23:42 crc kubenswrapper[4985]: I0128 18:23:42.909652 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2"} Jan 28 18:24:16 
crc kubenswrapper[4985]: I0128 18:24:16.460739 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg"] Jan 28 18:24:16 crc kubenswrapper[4985]: E0128 18:24:16.461569 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerName="console" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.461581 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerName="console" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.461702 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerName="console" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.462536 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.465288 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.477358 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg"] Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.490593 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.490684 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.490746 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.592328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.592397 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.592438 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.593022 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.593301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.619744 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.782336 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:24:17 crc kubenswrapper[4985]: I0128 18:24:17.291149 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg"]
Jan 28 18:24:18 crc kubenswrapper[4985]: I0128 18:24:18.186858 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerID="45d4670b1ff63e8549d859b628e6848fe37b4078f1a01f540b83faa92b3a8bed" exitCode=0
Jan 28 18:24:18 crc kubenswrapper[4985]: I0128 18:24:18.186985 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerDied","Data":"45d4670b1ff63e8549d859b628e6848fe37b4078f1a01f540b83faa92b3a8bed"}
Jan 28 18:24:18 crc kubenswrapper[4985]: I0128 18:24:18.187411 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerStarted","Data":"254f2190b65ac08c219c05e075f548fc377bb0cbde4613a62a45eaad2b561308"}
Jan 28 18:24:18 crc kubenswrapper[4985]: I0128 18:24:18.189212 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 18:24:19 crc kubenswrapper[4985]: E0128 18:24:19.439894 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ffee15_9ee0_496b_920f_87dd09fd08ec.slice/crio-6c0595555fe695769c7a9af36fd5893cfae3e92ceb1f67c90b40b527b716cd29.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ffee15_9ee0_496b_920f_87dd09fd08ec.slice/crio-conmon-6c0595555fe695769c7a9af36fd5893cfae3e92ceb1f67c90b40b527b716cd29.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 18:24:20 crc kubenswrapper[4985]: I0128 18:24:20.219160 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerID="6c0595555fe695769c7a9af36fd5893cfae3e92ceb1f67c90b40b527b716cd29" exitCode=0
Jan 28 18:24:20 crc kubenswrapper[4985]: I0128 18:24:20.219213 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerDied","Data":"6c0595555fe695769c7a9af36fd5893cfae3e92ceb1f67c90b40b527b716cd29"}
Jan 28 18:24:21 crc kubenswrapper[4985]: I0128 18:24:21.229496 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerID="2a92611d01914b1660fd1dc8c220df25068014a23c7e0b8c660dc130da89e309" exitCode=0
Jan 28 18:24:21 crc kubenswrapper[4985]: I0128 18:24:21.229611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerDied","Data":"2a92611d01914b1660fd1dc8c220df25068014a23c7e0b8c660dc130da89e309"}
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.482126 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg"
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.592878 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") pod \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") "
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.592973 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") pod \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") "
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.593128 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") pod \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") "
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.595758 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle" (OuterVolumeSpecName: "bundle") pod "c3ffee15-9ee0-496b-920f-87dd09fd08ec" (UID: "c3ffee15-9ee0-496b-920f-87dd09fd08ec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.608115 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d" (OuterVolumeSpecName: "kube-api-access-d4j6d") pod "c3ffee15-9ee0-496b-920f-87dd09fd08ec" (UID: "c3ffee15-9ee0-496b-920f-87dd09fd08ec"). InnerVolumeSpecName "kube-api-access-d4j6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.612199 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util" (OuterVolumeSpecName: "util") pod "c3ffee15-9ee0-496b-920f-87dd09fd08ec" (UID: "c3ffee15-9ee0-496b-920f-87dd09fd08ec"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.695311 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.695360 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.695383 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:23 crc kubenswrapper[4985]: I0128 18:24:23.247084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerDied","Data":"254f2190b65ac08c219c05e075f548fc377bb0cbde4613a62a45eaad2b561308"}
Jan 28 18:24:23 crc kubenswrapper[4985]: I0128 18:24:23.247137 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254f2190b65ac08c219c05e075f548fc377bb0cbde4613a62a45eaad2b561308"
Jan 28 18:24:23 crc kubenswrapper[4985]: I0128 18:24:23.247221 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg"
Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.517432 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zd8w7"]
Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518417 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-controller" containerID="cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" gracePeriod=30
Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518477 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" gracePeriod=30
Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518495 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="northd" containerID="cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" gracePeriod=30
Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518519 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-node" containerID="cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" gracePeriod=30
Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518484 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="nbdb" containerID="cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" gracePeriod=30
Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518536 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="sbdb" containerID="cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" gracePeriod=30
Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518558 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-acl-logging" containerID="cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" gracePeriod=30
Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.568562 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" containerID="cri-o://e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" gracePeriod=30
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.302039 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/3.log"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.304398 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-acl-logging/0.log"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.304846 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-controller/0.log"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305314 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" exitCode=0
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305336 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" exitCode=0
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305343 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" exitCode=0
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305349 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" exitCode=0
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305355 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" exitCode=143
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305366 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" exitCode=143
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305389 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154"}
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305467 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049"}
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305482 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290"}
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305495 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022"}
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305509 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07"}
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305522 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2"}
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305532 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.307642 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/2.log"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.308182 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/1.log"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.308241 4985 generic.go:334] "Generic (PLEG): container finished" podID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" containerID="95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535" exitCode=2
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.308282 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerDied","Data":"95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535"}
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.308866 4985 scope.go:117] "RemoveContainer" containerID="95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.309143 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-g2g4k_openshift-multus(14fdd73a-b8dd-42da-88b4-2ccb314c4f7a)\"" pod="openshift-multus/multus-g2g4k" podUID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.331125 4985 scope.go:117] "RemoveContainer" containerID="72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.767322 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-acl-logging/0.log"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.768081 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-controller/0.log"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.768477 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794112 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794182 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794276 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794288 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794308 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794356 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794375 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794400 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794436 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794449 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794486 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794513 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794638 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794672 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794677 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794727 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794755 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794761 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794795 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794807 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794839 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794869 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794899 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794901 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log" (OuterVolumeSpecName: "node-log") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794910 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794925 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794929 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794932 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794947 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794975 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794945 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket" (OuterVolumeSpecName: "log-socket") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794958 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash" (OuterVolumeSpecName: "host-slash") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795025 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") "
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795290 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795391 4985 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795410 4985 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795422 4985 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795434 4985 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795446 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795457 4985 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795471 4985 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795480 4985 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795489 4985 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795497 4985 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795506 4985 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795515 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795523 4985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795533 4985 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795542 4985 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795555 4985 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795565 4985 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.803627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd" (OuterVolumeSpecName: "kube-api-access-ktbbd") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "kube-api-access-ktbbd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.821546 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.831627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.901632 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.901672 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.901683 4985 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.960914 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t7xb2"]
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961186 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961201 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961211 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-acl-logging"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961270 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-acl-logging"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961283 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961294 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961300 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="extract"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961305 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="extract"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961314 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="nbdb"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961319 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="nbdb"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961325 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961331 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961342 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961348 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961363 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-node"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961373 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-node"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961383 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="pull"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961389 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="pull"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961403 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="util"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961409 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="util"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961419 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-ovn-metrics"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961425 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-ovn-metrics"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961434 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kubecfg-setup"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961440 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kubecfg-setup"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961448 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="northd"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961453 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="northd"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961461 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="sbdb"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961467 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="sbdb"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961605 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="sbdb"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961617 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961627 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961635 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961643 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="northd"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961650 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961659 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-ovn-metrics"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961667 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="nbdb"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961678 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="extract"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961685 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-node"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961696 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-acl-logging"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961705 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961812 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961821 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961947 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.962095 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.962106 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller"
Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.963969 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2"
Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.002781 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-etc-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2"
Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.002850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-node-log\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2"
Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.002874 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-config\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2"
Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.002893 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-systemd-units\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2"
Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003080 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-systemd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2"
Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003159 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-kubelet\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2"
Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003213 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-env-overrides\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2"
Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003348 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovn-node-metrics-cert\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2"
Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003387 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-slash\")
pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003451 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003561 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-log-socket\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003661 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-netd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003818 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-ovn\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003854 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fflxj\" (UniqueName: \"kubernetes.io/projected/5eaf2e7f-83ab-438b-8de3-75886a97ada4-kube-api-access-fflxj\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003932 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-bin\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003989 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-script-lib\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.004102 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-var-lib-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.004133 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.004164 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-netns\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105387 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-ovn\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105468 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fflxj\" (UniqueName: \"kubernetes.io/projected/5eaf2e7f-83ab-438b-8de3-75886a97ada4-kube-api-access-fflxj\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105516 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-bin\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-script-lib\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105596 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-var-lib-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105625 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105655 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-netns\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105694 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-etc-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-node-log\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-ovn\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.106416 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-bin\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-script-lib\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107483 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-var-lib-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107530 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107582 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-netns\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107626 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-etc-openvswitch\") pod 
\"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105732 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-node-log\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107695 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-config\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107724 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-systemd-units\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107763 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-systemd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107790 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-kubelet\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107816 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-env-overrides\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107858 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovn-node-metrics-cert\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107891 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-slash\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107917 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107962 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-log-socket\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-netd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108036 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108145 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-config\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108906 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-systemd-units\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108937 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-systemd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108967 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-kubelet\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.109415 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-env-overrides\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 
crc kubenswrapper[4985]: I0128 18:24:29.109967 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.109986 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-log-socket\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.110077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-netd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.110106 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-slash\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.112788 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovn-node-metrics-cert\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.124522 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fflxj\" (UniqueName: \"kubernetes.io/projected/5eaf2e7f-83ab-438b-8de3-75886a97ada4-kube-api-access-fflxj\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.287608 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.322659 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-acl-logging/0.log" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323065 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-controller/0.log" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323437 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" exitCode=0 Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323462 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" exitCode=0 Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323518 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493"} Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323546 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4"} Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323556 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3"} Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323577 4985 scope.go:117] "RemoveContainer" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323734 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.330567 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/2.log" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.348071 4985 scope.go:117] "RemoveContainer" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.392486 4985 scope.go:117] "RemoveContainer" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.429816 4985 scope.go:117] "RemoveContainer" containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.432406 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zd8w7"] Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.439843 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zd8w7"] Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.492384 4985 scope.go:117] "RemoveContainer" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.544555 4985 scope.go:117] "RemoveContainer" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.601029 4985 scope.go:117] "RemoveContainer" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.625015 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eaf2e7f_83ab_438b_8de3_75886a97ada4.slice/crio-conmon-40403e856521d655954c572d23f008f7a413527effb3b0ae52c77869649a3791.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eaf2e7f_83ab_438b_8de3_75886a97ada4.slice/crio-40403e856521d655954c572d23f008f7a413527effb3b0ae52c77869649a3791.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.636507 4985 scope.go:117] "RemoveContainer" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.704457 4985 scope.go:117] "RemoveContainer" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.739569 4985 scope.go:117] "RemoveContainer" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.741265 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": container with ID starting with e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154 not found: ID does not exist" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.741300 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154"} err="failed to get container status \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": rpc error: code = NotFound desc = could not find container \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": container with ID starting with e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.741328 4985 scope.go:117] "RemoveContainer" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.743576 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": container with ID starting with 10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049 not found: ID does not exist" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.743600 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049"} err="failed to get container status \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": rpc error: code = NotFound desc = could not find container \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": container with ID starting with 10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.743615 4985 scope.go:117] "RemoveContainer" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.745841 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": container with ID starting with b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290 not found: ID does not exist" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.745886 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290"} err="failed to get container status \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": rpc error: code = NotFound desc = could not find container \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": container with ID starting with b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.745914 4985 scope.go:117] "RemoveContainer" containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.746172 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": container with ID starting with 4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022 not found: ID does not exist" 
containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.746210 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022"} err="failed to get container status \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": rpc error: code = NotFound desc = could not find container \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": container with ID starting with 4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.746226 4985 scope.go:117] "RemoveContainer" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.749457 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": container with ID starting with 7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493 not found: ID does not exist" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.749498 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493"} err="failed to get container status \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": rpc error: code = NotFound desc = could not find container \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": container with ID starting with 7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.749513 4985 scope.go:117] "RemoveContainer" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.749731 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": container with ID starting with 6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4 not found: ID does not exist" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.749769 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4"} err="failed to get container status \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": rpc error: code = NotFound desc = could not find container \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": container with ID starting with 6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.749786 4985 scope.go:117] "RemoveContainer" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.749983 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": container with ID starting with ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07 not found: ID does not exist" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750003 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07"} err="failed to get container status \"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": rpc error: code = NotFound desc = could not find container \"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": container with ID starting with ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750033 4985 scope.go:117] "RemoveContainer" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.750237 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": container with ID starting with c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2 not found: ID does not exist" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750355 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2"} err="failed to get container status \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": rpc error: code = NotFound desc = could not find container \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": container with ID starting with c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750369 4985 scope.go:117] "RemoveContainer" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.750591 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": container with ID starting with da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13 not found: ID does not exist" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750610 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13"} err="failed to get container status \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": rpc error: code = NotFound desc = could not find container \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": container with ID starting with da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750652 4985 scope.go:117] "RemoveContainer" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" Jan 28 18:24:29 crc 
kubenswrapper[4985]: I0128 18:24:29.750821 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154"} err="failed to get container status \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": rpc error: code = NotFound desc = could not find container \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": container with ID starting with e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750839 4985 scope.go:117] "RemoveContainer" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751026 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049"} err="failed to get container status \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": rpc error: code = NotFound desc = could not find container \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": container with ID starting with 10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751066 4985 scope.go:117] "RemoveContainer" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751227 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290"} err="failed to get container status \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": rpc error: code = NotFound desc = could not find container \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": container with ID starting with b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751244 4985 scope.go:117] "RemoveContainer" containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751445 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022"} err="failed to get container status \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": rpc error: code = NotFound desc = could not find container \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": container with ID starting with 4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751495 4985 scope.go:117] "RemoveContainer" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751690 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493"} err="failed to get container status \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": rpc error: code = NotFound desc = could not find container \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": container with ID 
starting with 7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751736 4985 scope.go:117] "RemoveContainer" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751930 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4"} err="failed to get container status \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": rpc error: code = NotFound desc = could not find container \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": container with ID starting with 6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751972 4985 scope.go:117] "RemoveContainer" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752139 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07"} err="failed to get container status \"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": rpc error: code = NotFound desc = could not find container \"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": container with ID starting with ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752161 4985 scope.go:117] "RemoveContainer" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752376 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2"} err="failed to get container status \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": rpc error: code = NotFound desc = could not find container \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": container with ID starting with c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752393 4985 scope.go:117] "RemoveContainer" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752571 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13"} err="failed to get container status \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": rpc error: code = NotFound desc = could not find container \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": container with ID starting with da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13 not found: ID does not exist" Jan 28 18:24:30 crc kubenswrapper[4985]: I0128 18:24:30.337526 4985 generic.go:334] "Generic (PLEG): container finished" podID="5eaf2e7f-83ab-438b-8de3-75886a97ada4" containerID="40403e856521d655954c572d23f008f7a413527effb3b0ae52c77869649a3791" exitCode=0 Jan 28 18:24:30 crc kubenswrapper[4985]: I0128 18:24:30.337569 4985 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerDied","Data":"40403e856521d655954c572d23f008f7a413527effb3b0ae52c77869649a3791"} Jan 28 18:24:30 crc kubenswrapper[4985]: I0128 18:24:30.337592 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"b7175ec38ee5684e88d07daad8a37cb7e95b9291762bbeff20ca302d93347d51"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.273965 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" path="/var/lib/kubelet/pods/bd7b8cde-d2fe-4842-857e-545172f5bd12/volumes" Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360131 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"677d53264845f1178736ce4c75b59139b9435a9d9962fc83fd5f67f7cb8c74e4"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360491 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"1a021b7cb135439167793d3a9270e28bd03b752b3dfbea56473b20c8b53e64a2"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360506 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"95d9f4be877a771d4082a16a854680569ae96249433bd2133eb0bf3ba433741d"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360519 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"922936e9ef6c305256663e7c5e2628237c01472b317ba492282a9bb9fec0a09e"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360530 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"c1fd07714381094ef88219d7d1ece4e146a19f50355bf88e062e6ee355789b5b"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360541 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"c09c05a924359342e91a3cb914a3154fe8936ccd9528071be9bc8e0c570f5495"} Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.891241 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-s9875"] Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.892954 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.894641 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-496gd" Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.895050 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.895727 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.992560 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwvw\" (UniqueName: \"kubernetes.io/projected/74fbf9d6-ccb4-4d90-9db8-2d4613334d81-kube-api-access-tmwvw\") pod \"obo-prometheus-operator-68bc856cb9-s9875\" (UID: \"74fbf9d6-ccb4-4d90-9db8-2d4613334d81\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.010669 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb"] Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.011799 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.015001 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.015037 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-xcf75" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.025317 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n"] Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.026339 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094212 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094324 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094410 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwvw\" (UniqueName: \"kubernetes.io/projected/74fbf9d6-ccb4-4d90-9db8-2d4613334d81-kube-api-access-tmwvw\") pod \"obo-prometheus-operator-68bc856cb9-s9875\" (UID: \"74fbf9d6-ccb4-4d90-9db8-2d4613334d81\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094449 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094492 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.120652 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-nfhqj"] Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.128224 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwvw\" (UniqueName: \"kubernetes.io/projected/74fbf9d6-ccb4-4d90-9db8-2d4613334d81-kube-api-access-tmwvw\") pod \"obo-prometheus-operator-68bc856cb9-s9875\" (UID: \"74fbf9d6-ccb4-4d90-9db8-2d4613334d81\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.140041 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.142332 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.142582 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-2fmlf" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196591 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwxm6\" (UniqueName: \"kubernetes.io/projected/a23ac89d-75e4-4511-afaa-ef9d6205a672-kube-api-access-vwxm6\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196634 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a23ac89d-75e4-4511-afaa-ef9d6205a672-observability-operator-tls\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196725 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196824 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196881 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.200700 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: 
\"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.200780 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.202843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.211986 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.213879 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.249011 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(29faabad969d76e1bc86a7032b8f52d0bfaa8ecd6ae885d70b138808bd732c18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.249090 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(29faabad969d76e1bc86a7032b8f52d0bfaa8ecd6ae885d70b138808bd732c18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.249125 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(29faabad969d76e1bc86a7032b8f52d0bfaa8ecd6ae885d70b138808bd732c18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.249180 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(29faabad969d76e1bc86a7032b8f52d0bfaa8ecd6ae885d70b138808bd732c18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" podUID="74fbf9d6-ccb4-4d90-9db8-2d4613334d81" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.299272 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwxm6\" (UniqueName: \"kubernetes.io/projected/a23ac89d-75e4-4511-afaa-ef9d6205a672-kube-api-access-vwxm6\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.299347 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a23ac89d-75e4-4511-afaa-ef9d6205a672-observability-operator-tls\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.308396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a23ac89d-75e4-4511-afaa-ef9d6205a672-observability-operator-tls\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.314262 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-j7z4h"] Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.315106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.317547 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-625jx" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.330875 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.334074 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwxm6\" (UniqueName: \"kubernetes.io/projected/a23ac89d-75e4-4511-afaa-ef9d6205a672-kube-api-access-vwxm6\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.346807 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.363059 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(208e09d05b8d14b5ecd6ae1f1eff9c4a121eb4be05af6654fb7b06e8385ea0c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.363178 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(208e09d05b8d14b5ecd6ae1f1eff9c4a121eb4be05af6654fb7b06e8385ea0c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.363205 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(208e09d05b8d14b5ecd6ae1f1eff9c4a121eb4be05af6654fb7b06e8385ea0c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.363292 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(208e09d05b8d14b5ecd6ae1f1eff9c4a121eb4be05af6654fb7b06e8385ea0c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podUID="23ef5df5-bfbe-4465-8e87-d69896bf70aa" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.386781 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"1758414e768b7ec440bcc7b839d9210e2b1b2c9efc4ac671be293450005b4f3e"} Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.390832 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(a72ecb34afecfb553c70190416fdae983240fe461836bfd976c95203f59652a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.390882 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(a72ecb34afecfb553c70190416fdae983240fe461836bfd976c95203f59652a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.390908 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(a72ecb34afecfb553c70190416fdae983240fe461836bfd976c95203f59652a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.390972 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(a72ecb34afecfb553c70190416fdae983240fe461836bfd976c95203f59652a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" podUID="e192375e-5db5-46e4-922b-21b8bc5698ba" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.400888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69m2l\" (UniqueName: \"kubernetes.io/projected/971845b8-805d-4b4a-a8fd-14f263f17695-kube-api-access-69m2l\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.400990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/971845b8-805d-4b4a-a8fd-14f263f17695-openshift-service-ca\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.463186 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.489412 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9b0522f93b41c249bf97c577b9df67d08e489c2e0c55f5a5a5fdd1f981d5ab29): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.489500 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9b0522f93b41c249bf97c577b9df67d08e489c2e0c55f5a5a5fdd1f981d5ab29): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.489535 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9b0522f93b41c249bf97c577b9df67d08e489c2e0c55f5a5a5fdd1f981d5ab29): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.489592 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9b0522f93b41c249bf97c577b9df67d08e489c2e0c55f5a5a5fdd1f981d5ab29): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.502101 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/971845b8-805d-4b4a-a8fd-14f263f17695-openshift-service-ca\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.502192 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69m2l\" (UniqueName: \"kubernetes.io/projected/971845b8-805d-4b4a-a8fd-14f263f17695-kube-api-access-69m2l\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.504127 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/971845b8-805d-4b4a-a8fd-14f263f17695-openshift-service-ca\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.526227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69m2l\" (UniqueName: \"kubernetes.io/projected/971845b8-805d-4b4a-a8fd-14f263f17695-kube-api-access-69m2l\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.630480 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.657633 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(96d9482064eef2c89d186774f6e3582ef0c84d0063bf78e7c74b2cce3005d96d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.657724 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(96d9482064eef2c89d186774f6e3582ef0c84d0063bf78e7c74b2cce3005d96d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.657756 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(96d9482064eef2c89d186774f6e3582ef0c84d0063bf78e7c74b2cce3005d96d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.657814 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(96d9482064eef2c89d186774f6e3582ef0c84d0063bf78e7c74b2cce3005d96d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.415628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"6eb47f3ff933b2a42e76298fe1e2b19e90ff72f7c98741de60d3cf30a481c54f"} Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.416059 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.416074 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.460821 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.464901 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" podStartSLOduration=8.464879861 podStartE2EDuration="8.464879861s" podCreationTimestamp="2026-01-28 18:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:24:36.462126543 +0000 UTC m=+687.288689374" watchObservedRunningTime="2026-01-28 18:24:36.464879861 +0000 UTC m=+687.291442682" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.577119 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.577278 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.577853 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.584716 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-s9875"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.584863 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.585402 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.591772 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.591947 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.592478 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.596577 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-j7z4h"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.596728 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.597232 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.606852 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-nfhqj"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.606989 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.607672 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.640835 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(93eeb652b0048b5817f30a43ddfc31c2a9f63710993025b796134cd5ebee29f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.640930 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(93eeb652b0048b5817f30a43ddfc31c2a9f63710993025b796134cd5ebee29f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.640963 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(93eeb652b0048b5817f30a43ddfc31c2a9f63710993025b796134cd5ebee29f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.641023 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(93eeb652b0048b5817f30a43ddfc31c2a9f63710993025b796134cd5ebee29f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" podUID="e192375e-5db5-46e4-922b-21b8bc5698ba" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.656502 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e0d8bd566b6792d29d17f2969f8e0d616138b6f6b3042e6e30b08de9fc377ab9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.656592 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e0d8bd566b6792d29d17f2969f8e0d616138b6f6b3042e6e30b08de9fc377ab9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.656626 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e0d8bd566b6792d29d17f2969f8e0d616138b6f6b3042e6e30b08de9fc377ab9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.656695 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e0d8bd566b6792d29d17f2969f8e0d616138b6f6b3042e6e30b08de9fc377ab9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" podUID="74fbf9d6-ccb4-4d90-9db8-2d4613334d81" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.662808 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(7b6e02a522756e55ef713e4083f235de23ac9f59cdd5fb64b1d6881b2c7fb62f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.662876 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(7b6e02a522756e55ef713e4083f235de23ac9f59cdd5fb64b1d6881b2c7fb62f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.662905 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(7b6e02a522756e55ef713e4083f235de23ac9f59cdd5fb64b1d6881b2c7fb62f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.662957 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(7b6e02a522756e55ef713e4083f235de23ac9f59cdd5fb64b1d6881b2c7fb62f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podUID="23ef5df5-bfbe-4465-8e87-d69896bf70aa" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.673514 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(1bac1c1ab1c5a0cc011606031d335e11b6612bcdd7cf56720dceff8ff1c16c2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.673592 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(1bac1c1ab1c5a0cc011606031d335e11b6612bcdd7cf56720dceff8ff1c16c2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.673615 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(1bac1c1ab1c5a0cc011606031d335e11b6612bcdd7cf56720dceff8ff1c16c2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.673676 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(1bac1c1ab1c5a0cc011606031d335e11b6612bcdd7cf56720dceff8ff1c16c2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.689131 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(8dd30a144e1329e37cc6303e45e1967666f6b414a0da21e86ac70951b01895f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.689222 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(8dd30a144e1329e37cc6303e45e1967666f6b414a0da21e86ac70951b01895f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.689265 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(8dd30a144e1329e37cc6303e45e1967666f6b414a0da21e86ac70951b01895f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.689321 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(8dd30a144e1329e37cc6303e45e1967666f6b414a0da21e86ac70951b01895f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" Jan 28 18:24:37 crc kubenswrapper[4985]: I0128 18:24:37.421498 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:37 crc kubenswrapper[4985]: I0128 18:24:37.462973 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:44 crc kubenswrapper[4985]: I0128 18:24:44.264072 4985 scope.go:117] "RemoveContainer" containerID="95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535" Jan 28 18:24:44 crc kubenswrapper[4985]: E0128 18:24:44.264894 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-g2g4k_openshift-multus(14fdd73a-b8dd-42da-88b4-2ccb314c4f7a)\"" pod="openshift-multus/multus-g2g4k" podUID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.265494 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.266448 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.266799 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.267058 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.267296 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.267545 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.317061 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(32957508a4349a8716a3b426572e91919ea6dad7bc003100cd1b52576f895b17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.317156 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(32957508a4349a8716a3b426572e91919ea6dad7bc003100cd1b52576f895b17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.317187 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(32957508a4349a8716a3b426572e91919ea6dad7bc003100cd1b52576f895b17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.317265 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(32957508a4349a8716a3b426572e91919ea6dad7bc003100cd1b52576f895b17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.326822 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(baf9d00a47d24313a5e6e14f1fa1f183055632ef953c50227b2826fa94cd3259): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.326901 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(baf9d00a47d24313a5e6e14f1fa1f183055632ef953c50227b2826fa94cd3259): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.326931 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(baf9d00a47d24313a5e6e14f1fa1f183055632ef953c50227b2826fa94cd3259): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.326985 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(baf9d00a47d24313a5e6e14f1fa1f183055632ef953c50227b2826fa94cd3259): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.334528 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(4b9061de8cbd45b11b8cf52b9c5668829e3f5e47aae2c86d96cfd48ccb2ef1e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.334611 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(4b9061de8cbd45b11b8cf52b9c5668829e3f5e47aae2c86d96cfd48ccb2ef1e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.334630 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(4b9061de8cbd45b11b8cf52b9c5668829e3f5e47aae2c86d96cfd48ccb2ef1e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.334678 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(4b9061de8cbd45b11b8cf52b9c5668829e3f5e47aae2c86d96cfd48ccb2ef1e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podUID="23ef5df5-bfbe-4465-8e87-d69896bf70aa" Jan 28 18:24:49 crc kubenswrapper[4985]: I0128 18:24:49.264004 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:49 crc kubenswrapper[4985]: I0128 18:24:49.264014 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:49 crc kubenswrapper[4985]: I0128 18:24:49.264964 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:49 crc kubenswrapper[4985]: I0128 18:24:49.265235 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.309378 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e83055780f4bfb5c9cdadf9ace8447600e2e32d94acfeef5e58bcc7143e0d175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.309460 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e83055780f4bfb5c9cdadf9ace8447600e2e32d94acfeef5e58bcc7143e0d175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.309484 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e83055780f4bfb5c9cdadf9ace8447600e2e32d94acfeef5e58bcc7143e0d175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.309532 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e83055780f4bfb5c9cdadf9ace8447600e2e32d94acfeef5e58bcc7143e0d175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" podUID="74fbf9d6-ccb4-4d90-9db8-2d4613334d81" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.320728 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(fe682c1137c94f49d1de3af096b59ea625a95faef4725635a32ea2943ad3f55a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.320804 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(fe682c1137c94f49d1de3af096b59ea625a95faef4725635a32ea2943ad3f55a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.320827 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(fe682c1137c94f49d1de3af096b59ea625a95faef4725635a32ea2943ad3f55a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.320892 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(fe682c1137c94f49d1de3af096b59ea625a95faef4725635a32ea2943ad3f55a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" podUID="e192375e-5db5-46e4-922b-21b8bc5698ba" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.263171 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.264195 4985 scope.go:117] "RemoveContainer" containerID="95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.264315 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:59 crc kubenswrapper[4985]: E0128 18:24:59.299730 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(8c4b00ae33fc1b763a4b5ef80dc9bae0cf6a6bba7db48d666e515829c5e36743): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:59 crc kubenswrapper[4985]: E0128 18:24:59.300118 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(8c4b00ae33fc1b763a4b5ef80dc9bae0cf6a6bba7db48d666e515829c5e36743): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:59 crc kubenswrapper[4985]: E0128 18:24:59.300141 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(8c4b00ae33fc1b763a4b5ef80dc9bae0cf6a6bba7db48d666e515829c5e36743): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:59 crc kubenswrapper[4985]: E0128 18:24:59.300198 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(8c4b00ae33fc1b763a4b5ef80dc9bae0cf6a6bba7db48d666e515829c5e36743): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podUID="23ef5df5-bfbe-4465-8e87-d69896bf70aa" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.333389 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.580282 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/2.log" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.580331 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"2fa855b376b5c1a8660d9a5849aee571e5d3906bf3e0683c102e56cd4407bf6a"} Jan 28 18:25:00 crc kubenswrapper[4985]: I0128 18:25:00.263640 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:00 crc kubenswrapper[4985]: I0128 18:25:00.264496 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:00 crc kubenswrapper[4985]: E0128 18:25:00.299535 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9701ed88a259d2af5b5a43e02af66ac3bbd05f98aa0234f03af5407a23824f45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:25:00 crc kubenswrapper[4985]: E0128 18:25:00.299598 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9701ed88a259d2af5b5a43e02af66ac3bbd05f98aa0234f03af5407a23824f45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:00 crc kubenswrapper[4985]: E0128 18:25:00.299624 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9701ed88a259d2af5b5a43e02af66ac3bbd05f98aa0234f03af5407a23824f45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:00 crc kubenswrapper[4985]: E0128 18:25:00.299675 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9701ed88a259d2af5b5a43e02af66ac3bbd05f98aa0234f03af5407a23824f45): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.263701 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.263895 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.267091 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.267239 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:25:01 crc kubenswrapper[4985]: W0128 18:25:01.693156 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74fbf9d6_ccb4_4d90_9db8_2d4613334d81.slice/crio-add82309d50d78a30022b16e5a3839e0440e8dabfb9fffdeb5835a3f9c201353 WatchSource:0}: Error finding container add82309d50d78a30022b16e5a3839e0440e8dabfb9fffdeb5835a3f9c201353: Status 404 returned error can't find the container with id add82309d50d78a30022b16e5a3839e0440e8dabfb9fffdeb5835a3f9c201353 Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.693630 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-s9875"] Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.732242 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n"] Jan 28 18:25:01 crc kubenswrapper[4985]: W0128 18:25:01.737425 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode192375e_5db5_46e4_922b_21b8bc5698ba.slice/crio-30d4ef74c8ac24a8cb23ef26e33466ad601d4cca6b68ee6d57910df3583be525 WatchSource:0}: Error finding container 30d4ef74c8ac24a8cb23ef26e33466ad601d4cca6b68ee6d57910df3583be525: Status 404 returned error can't find the container with id 30d4ef74c8ac24a8cb23ef26e33466ad601d4cca6b68ee6d57910df3583be525 Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.263449 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.264052 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.517237 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-j7z4h"] Jan 28 18:25:02 crc kubenswrapper[4985]: W0128 18:25:02.525766 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod971845b8_805d_4b4a_a8fd_14f263f17695.slice/crio-07407725e386e35b6df2f030849dc111c0520473845e5f97965a659a2ca7d564 WatchSource:0}: Error finding container 07407725e386e35b6df2f030849dc111c0520473845e5f97965a659a2ca7d564: Status 404 returned error can't find the container with id 07407725e386e35b6df2f030849dc111c0520473845e5f97965a659a2ca7d564 Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.598198 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" event={"ID":"74fbf9d6-ccb4-4d90-9db8-2d4613334d81","Type":"ContainerStarted","Data":"add82309d50d78a30022b16e5a3839e0440e8dabfb9fffdeb5835a3f9c201353"} Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.600793 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" event={"ID":"e192375e-5db5-46e4-922b-21b8bc5698ba","Type":"ContainerStarted","Data":"30d4ef74c8ac24a8cb23ef26e33466ad601d4cca6b68ee6d57910df3583be525"} Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.605492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" event={"ID":"971845b8-805d-4b4a-a8fd-14f263f17695","Type":"ContainerStarted","Data":"07407725e386e35b6df2f030849dc111c0520473845e5f97965a659a2ca7d564"} Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.650459 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" event={"ID":"971845b8-805d-4b4a-a8fd-14f263f17695","Type":"ContainerStarted","Data":"7c5ad487890dc7f8cf939d3bf62e5a7d4cfbe598079616ba846dec6e2e0d74d4"} Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.651223 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.653017 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" event={"ID":"74fbf9d6-ccb4-4d90-9db8-2d4613334d81","Type":"ContainerStarted","Data":"6970029b0a83996e485f6e97e90fa6a4a4dc35f84627861d74e3045341f5e7c8"} Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.656017 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" event={"ID":"e192375e-5db5-46e4-922b-21b8bc5698ba","Type":"ContainerStarted","Data":"6ab744b3faa2dcd6a5678b4286389247407f71b5138248269e9852af1dd3926d"} Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.683948 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podStartSLOduration=29.075409269 podStartE2EDuration="34.683928119s" podCreationTimestamp="2026-01-28 18:24:34 +0000 UTC" firstStartedPulling="2026-01-28 18:25:02.528659886 +0000 UTC m=+713.355222707" lastFinishedPulling="2026-01-28 18:25:08.137178736 +0000 UTC m=+718.963741557" observedRunningTime="2026-01-28 
18:25:08.679420072 +0000 UTC m=+719.505982893" watchObservedRunningTime="2026-01-28 18:25:08.683928119 +0000 UTC m=+719.510490940" Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.702792 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" podStartSLOduration=28.304722169 podStartE2EDuration="34.702773581s" podCreationTimestamp="2026-01-28 18:24:34 +0000 UTC" firstStartedPulling="2026-01-28 18:25:01.739539235 +0000 UTC m=+712.566102056" lastFinishedPulling="2026-01-28 18:25:08.137590647 +0000 UTC m=+718.964153468" observedRunningTime="2026-01-28 18:25:08.698173961 +0000 UTC m=+719.524736802" watchObservedRunningTime="2026-01-28 18:25:08.702773581 +0000 UTC m=+719.529336402" Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.725003 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" podStartSLOduration=29.302884076 podStartE2EDuration="35.724985327s" podCreationTimestamp="2026-01-28 18:24:33 +0000 UTC" firstStartedPulling="2026-01-28 18:25:01.696087729 +0000 UTC m=+712.522650550" lastFinishedPulling="2026-01-28 18:25:08.11818898 +0000 UTC m=+718.944751801" observedRunningTime="2026-01-28 18:25:08.720400338 +0000 UTC m=+719.546963169" watchObservedRunningTime="2026-01-28 18:25:08.724985327 +0000 UTC m=+719.551548148" Jan 28 18:25:14 crc kubenswrapper[4985]: I0128 18:25:14.272208 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:14 crc kubenswrapper[4985]: I0128 18:25:14.273863 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:14 crc kubenswrapper[4985]: I0128 18:25:14.634203 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:25:14 crc kubenswrapper[4985]: I0128 18:25:14.797900 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-nfhqj"] Jan 28 18:25:14 crc kubenswrapper[4985]: W0128 18:25:14.818446 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda23ac89d_75e4_4511_afaa_ef9d6205a672.slice/crio-f558dc2e9ddb82cd5fc588a21b26ed4ae91ab8f2b135f922d2095a11ecd2c689 WatchSource:0}: Error finding container f558dc2e9ddb82cd5fc588a21b26ed4ae91ab8f2b135f922d2095a11ecd2c689: Status 404 returned error can't find the container with id f558dc2e9ddb82cd5fc588a21b26ed4ae91ab8f2b135f922d2095a11ecd2c689 Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.263959 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.264890 4985 util.go:30] "No sandbox for pod can be found. 
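The three "Observed pod startup duration" entries above show how the pod_startup_latency_tracker figures fit together: podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). That relationship is inferred from the logged values themselves, not from the kubelet source; a quick arithmetic check against the perses-operator entry:

    # Seconds past 18:24:00 UTC, copied from the perses-operator entry above.
    created    = 34.0           # podCreationTimestamp     18:24:34
    pull_start = 62.528659886   # firstStartedPulling      18:25:02.528659886
    pull_end   = 68.137178736   # lastFinishedPulling      18:25:08.137178736
    observed   = 68.683928119   # watchObservedRunningTime 18:25:08.683928119

    e2e = observed - created             # podStartE2EDuration: expect 34.683928119
    slo = e2e - (pull_end - pull_start)  # podStartSLOduration: expect 29.075409269
    print(f"e2e={e2e:.9f}s  slo={slo:.9f}s")

The j28rb entry further below fits the same reading: both of its pull timestamps are the zero value 0001-01-01 00:00:00, so its SLO and E2E durations coincide at 43.735326682s.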
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.531525 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb"] Jan 28 18:25:15 crc kubenswrapper[4985]: W0128 18:25:15.541855 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ef5df5_bfbe_4465_8e87_d69896bf70aa.slice/crio-bd0d94c9b1401faa512cfae652ff118614958312c37dcfd0ffca0410295b4b63 WatchSource:0}: Error finding container bd0d94c9b1401faa512cfae652ff118614958312c37dcfd0ffca0410295b4b63: Status 404 returned error can't find the container with id bd0d94c9b1401faa512cfae652ff118614958312c37dcfd0ffca0410295b4b63 Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.699368 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" event={"ID":"23ef5df5-bfbe-4465-8e87-d69896bf70aa","Type":"ContainerStarted","Data":"bd0d94c9b1401faa512cfae652ff118614958312c37dcfd0ffca0410295b4b63"} Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.700644 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" event={"ID":"a23ac89d-75e4-4511-afaa-ef9d6205a672","Type":"ContainerStarted","Data":"f558dc2e9ddb82cd5fc588a21b26ed4ae91ab8f2b135f922d2095a11ecd2c689"} Jan 28 18:25:16 crc kubenswrapper[4985]: I0128 18:25:16.710312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" event={"ID":"23ef5df5-bfbe-4465-8e87-d69896bf70aa","Type":"ContainerStarted","Data":"406e4cb8be88297103d4ce975fe592879d793a5f6960baaa20428a386b377277"} Jan 28 18:25:16 crc kubenswrapper[4985]: I0128 18:25:16.735344 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podStartSLOduration=43.735326682 podStartE2EDuration="43.735326682s" podCreationTimestamp="2026-01-28 18:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:25:16.730389812 +0000 UTC m=+727.556952643" watchObservedRunningTime="2026-01-28 18:25:16.735326682 +0000 UTC m=+727.561889503" Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.731546 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" event={"ID":"a23ac89d-75e4-4511-afaa-ef9d6205a672","Type":"ContainerStarted","Data":"22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd"} Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.731925 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.732796 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" start-of-body= Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.732860 4985 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.762888 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podStartSLOduration=41.040507792 podStartE2EDuration="45.762866675s" podCreationTimestamp="2026-01-28 18:24:34 +0000 UTC" firstStartedPulling="2026-01-28 18:25:14.823215593 +0000 UTC m=+725.649778414" lastFinishedPulling="2026-01-28 18:25:19.545574466 +0000 UTC m=+730.372137297" observedRunningTime="2026-01-28 18:25:19.758051889 +0000 UTC m=+730.584614710" watchObservedRunningTime="2026-01-28 18:25:19.762866675 +0000 UTC m=+730.589429496" Jan 28 18:25:20 crc kubenswrapper[4985]: I0128 18:25:20.742200 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.752539 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.753889 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.755525 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.756319 4985 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5vjds" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.756432 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.764187 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-dzhtm"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.765058 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.767882 4985 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rz7bt" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.773405 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.791480 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mwrk6"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.792349 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.794450 4985 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-h7sp5" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.805156 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dzhtm"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.812685 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mwrk6"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.823597 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7sq\" (UniqueName: \"kubernetes.io/projected/26777afd-4d9f-4ebb-b8ed-0be018fa5a17-kube-api-access-hh7sq\") pod \"cert-manager-webhook-687f57d79b-mwrk6\" (UID: \"26777afd-4d9f-4ebb-b8ed-0be018fa5a17\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.823663 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5fpp\" (UniqueName: \"kubernetes.io/projected/aa962965-4b70-40f4-8400-b7ff2ec182e9-kube-api-access-w5fpp\") pod \"cert-manager-cainjector-cf98fcc89-bcvwj\" (UID: \"aa962965-4b70-40f4-8400-b7ff2ec182e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.823691 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvthm\" (UniqueName: \"kubernetes.io/projected/4f9db9b6-ec43-4789-9efd-f2d4831c67e8-kube-api-access-bvthm\") pod \"cert-manager-858654f9db-dzhtm\" (UID: \"4f9db9b6-ec43-4789-9efd-f2d4831c67e8\") " pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.924938 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh7sq\" (UniqueName: \"kubernetes.io/projected/26777afd-4d9f-4ebb-b8ed-0be018fa5a17-kube-api-access-hh7sq\") pod \"cert-manager-webhook-687f57d79b-mwrk6\" (UID: \"26777afd-4d9f-4ebb-b8ed-0be018fa5a17\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.925292 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5fpp\" (UniqueName: \"kubernetes.io/projected/aa962965-4b70-40f4-8400-b7ff2ec182e9-kube-api-access-w5fpp\") pod \"cert-manager-cainjector-cf98fcc89-bcvwj\" (UID: \"aa962965-4b70-40f4-8400-b7ff2ec182e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.925324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvthm\" (UniqueName: \"kubernetes.io/projected/4f9db9b6-ec43-4789-9efd-f2d4831c67e8-kube-api-access-bvthm\") pod \"cert-manager-858654f9db-dzhtm\" (UID: \"4f9db9b6-ec43-4789-9efd-f2d4831c67e8\") " pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.943239 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvthm\" (UniqueName: \"kubernetes.io/projected/4f9db9b6-ec43-4789-9efd-f2d4831c67e8-kube-api-access-bvthm\") pod \"cert-manager-858654f9db-dzhtm\" (UID: \"4f9db9b6-ec43-4789-9efd-f2d4831c67e8\") " 
pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.948652 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5fpp\" (UniqueName: \"kubernetes.io/projected/aa962965-4b70-40f4-8400-b7ff2ec182e9-kube-api-access-w5fpp\") pod \"cert-manager-cainjector-cf98fcc89-bcvwj\" (UID: \"aa962965-4b70-40f4-8400-b7ff2ec182e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.952086 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh7sq\" (UniqueName: \"kubernetes.io/projected/26777afd-4d9f-4ebb-b8ed-0be018fa5a17-kube-api-access-hh7sq\") pod \"cert-manager-webhook-687f57d79b-mwrk6\" (UID: \"26777afd-4d9f-4ebb-b8ed-0be018fa5a17\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.071900 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.079948 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.107294 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.315444 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dzhtm"] Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.593896 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj"] Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.600114 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mwrk6"] Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.824826 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" event={"ID":"26777afd-4d9f-4ebb-b8ed-0be018fa5a17","Type":"ContainerStarted","Data":"bfc419325b88b224232769b53268124515c8a3deadb7bd3dd62760b7baa1bc3a"} Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.825959 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" event={"ID":"aa962965-4b70-40f4-8400-b7ff2ec182e9","Type":"ContainerStarted","Data":"120c9843c75cf09029347e11e4e79ad5ca84e673294a12475d6627389a1b60c1"} Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.827084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dzhtm" event={"ID":"4f9db9b6-ec43-4789-9efd-f2d4831c67e8","Type":"ContainerStarted","Data":"6d2900cc8d8154d9389303f37c292e434e83acf2dca78c8e9012754b8db7f450"} Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.868622 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dzhtm" event={"ID":"4f9db9b6-ec43-4789-9efd-f2d4831c67e8","Type":"ContainerStarted","Data":"db09f7747f41e7c5012f23ee3ad3a5e9ac0c27fae2a1dd084ad0d5f9ecde13be"} Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.869922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" 
event={"ID":"26777afd-4d9f-4ebb-b8ed-0be018fa5a17","Type":"ContainerStarted","Data":"efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093"} Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.870094 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.886812 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-dzhtm" podStartSLOduration=2.867801993 podStartE2EDuration="7.886793623s" podCreationTimestamp="2026-01-28 18:25:30 +0000 UTC" firstStartedPulling="2026-01-28 18:25:31.326328259 +0000 UTC m=+742.152891080" lastFinishedPulling="2026-01-28 18:25:36.345319889 +0000 UTC m=+747.171882710" observedRunningTime="2026-01-28 18:25:37.884142318 +0000 UTC m=+748.710705139" watchObservedRunningTime="2026-01-28 18:25:37.886793623 +0000 UTC m=+748.713356454" Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.905619 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podStartSLOduration=3.129742732 podStartE2EDuration="7.905594933s" podCreationTimestamp="2026-01-28 18:25:30 +0000 UTC" firstStartedPulling="2026-01-28 18:25:31.608042266 +0000 UTC m=+742.434605087" lastFinishedPulling="2026-01-28 18:25:36.383894467 +0000 UTC m=+747.210457288" observedRunningTime="2026-01-28 18:25:37.902887357 +0000 UTC m=+748.729450188" watchObservedRunningTime="2026-01-28 18:25:37.905594933 +0000 UTC m=+748.732157754" Jan 28 18:25:38 crc kubenswrapper[4985]: I0128 18:25:38.877995 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" event={"ID":"aa962965-4b70-40f4-8400-b7ff2ec182e9","Type":"ContainerStarted","Data":"b87ebcf07463fd8c12859cde5e70b6fb80a7592a6f699d9b3da5c0069d2af80a"} Jan 28 18:25:38 crc kubenswrapper[4985]: I0128 18:25:38.895159 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" podStartSLOduration=2.372690296 podStartE2EDuration="8.895136238s" podCreationTimestamp="2026-01-28 18:25:30 +0000 UTC" firstStartedPulling="2026-01-28 18:25:31.591144429 +0000 UTC m=+742.417707240" lastFinishedPulling="2026-01-28 18:25:38.113590361 +0000 UTC m=+748.940153182" observedRunningTime="2026-01-28 18:25:38.89310505 +0000 UTC m=+749.719667881" watchObservedRunningTime="2026-01-28 18:25:38.895136238 +0000 UTC m=+749.721699059" Jan 28 18:25:46 crc kubenswrapper[4985]: I0128 18:25:46.110283 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:54 crc kubenswrapper[4985]: I0128 18:25:54.278599 4985 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 18:26:11 crc kubenswrapper[4985]: I0128 18:26:11.186792 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:26:11 crc kubenswrapper[4985]: I0128 18:26:11.187549 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.126451 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds"] Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.128991 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.131818 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.137951 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds"] Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.226572 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.226780 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.226935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.328515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.328674 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.329060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.329688 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.330276 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.355375 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.447355 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.526980 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95"] Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.528756 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.537169 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95"] Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.646946 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.647328 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.647372 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.749478 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.749536 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.749577 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.750654 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.751081 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.767578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.879528 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.982574 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds"] Jan 28 18:26:15 crc kubenswrapper[4985]: W0128 18:26:15.987468 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2f76b8f_1fff_44e6_931b_d35852c1ab04.slice/crio-7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597 WatchSource:0}: Error finding container 7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597: Status 404 returned error can't find the container with id 7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597 Jan 28 18:26:16 crc kubenswrapper[4985]: I0128 18:26:16.119476 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95"] Jan 28 18:26:16 crc kubenswrapper[4985]: I0128 18:26:16.150830 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerStarted","Data":"d14c9322216608ff3fd9b4c5f70c9086a5972c70a87762641033ea553f1b5def"} Jan 28 18:26:16 crc kubenswrapper[4985]: I0128 18:26:16.153788 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerStarted","Data":"894e15ec7d9220f942b14acfcad7685a2367b1b0f812f2e821ac326391a596a4"} Jan 28 18:26:16 crc kubenswrapper[4985]: I0128 18:26:16.153836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerStarted","Data":"7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597"} Jan 28 18:26:17 crc kubenswrapper[4985]: I0128 18:26:17.160645 4985 generic.go:334] "Generic (PLEG): container finished" podID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerID="c4ac76dea0f68a800666e4d35f648b0040acc4cb01a7cb6535b7cc18059fb1e3" exitCode=0 Jan 
Jan 28 18:26:17 crc kubenswrapper[4985]: I0128 18:26:17.160745 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerDied","Data":"c4ac76dea0f68a800666e4d35f648b0040acc4cb01a7cb6535b7cc18059fb1e3"}
Jan 28 18:26:17 crc kubenswrapper[4985]: I0128 18:26:17.174148 4985 generic.go:334] "Generic (PLEG): container finished" podID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerID="894e15ec7d9220f942b14acfcad7685a2367b1b0f812f2e821ac326391a596a4" exitCode=0
Jan 28 18:26:17 crc kubenswrapper[4985]: I0128 18:26:17.174368 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerDied","Data":"894e15ec7d9220f942b14acfcad7685a2367b1b0f812f2e821ac326391a596a4"}
Jan 28 18:26:18 crc kubenswrapper[4985]: I0128 18:26:18.868854 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"]
Jan 28 18:26:18 crc kubenswrapper[4985]: I0128 18:26:18.870971 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:18 crc kubenswrapper[4985]: I0128 18:26:18.881962 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"]
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.005530 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.005649 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.005695 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.107581 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.107641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.107665 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.108396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.108609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.152242 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.188705 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.190374 4985 generic.go:334] "Generic (PLEG): container finished" podID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerID="5e02319af2540360ecf8371ada1fc857a03d8e9891ff4ad09fbe5e3ee5955e14" exitCode=0
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.190457 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerDied","Data":"5e02319af2540360ecf8371ada1fc857a03d8e9891ff4ad09fbe5e3ee5955e14"}
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.193982 4985 generic.go:334] "Generic (PLEG): container finished" podID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerID="a13b49cc7e5a6c2a85243136ccb7cd9085a298499675dae80e5751a420c59978" exitCode=0
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.194055 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerDied","Data":"a13b49cc7e5a6c2a85243136ccb7cd9085a298499675dae80e5751a420c59978"}
Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.435390 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"]
Jan 28 18:26:19 crc kubenswrapper[4985]: W0128 18:26:19.443651 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d87bdf0_7212_4ee9_a727_c4c4dfa0a6f9.slice/crio-edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14 WatchSource:0}: Error finding container edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14: Status 404 returned error can't find the container with id edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14
Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.202615 4985 generic.go:334] "Generic (PLEG): container finished" podID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerID="e8d028e6fa502a4926094f90447dd5b0dfaa5b2776af57350b61ce63ec91efa8" exitCode=0
Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.202696 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerDied","Data":"e8d028e6fa502a4926094f90447dd5b0dfaa5b2776af57350b61ce63ec91efa8"}
Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.203030 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerStarted","Data":"edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14"}
Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.209239 4985 generic.go:334] "Generic (PLEG): container finished" podID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerID="b0fcaf2aa9fc6cb35b7aa0ba340b5c41ae600a87a1bae320b336b665aa63865d" exitCode=0
Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.209369 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerDied","Data":"b0fcaf2aa9fc6cb35b7aa0ba340b5c41ae600a87a1bae320b336b665aa63865d"}
Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.217831 4985 generic.go:334] "Generic (PLEG): container finished" podID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerID="166206872b0c4d4884e6fc515dd80ff9dfc15537397aa40de4b4a7ad7d6f4489" exitCode=0
Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.217864 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerDied","Data":"166206872b0c4d4884e6fc515dd80ff9dfc15537397aa40de4b4a7ad7d6f4489"}
Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.226437 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerStarted","Data":"c0101ba127274bf28c8cc50d2966b9e93977f192a37fbe59aa75129ed11ee8f9"}
Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.601759 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds"
Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.607997 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95"
Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.646779 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") pod \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") "
Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.646890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") pod \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") "
Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.646924 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") pod \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") "
Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.648303 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle" (OuterVolumeSpecName: "bundle") pod "a2f76b8f-1fff-44e6-931b-d35852c1ab04" (UID: "a2f76b8f-1fff-44e6-931b-d35852c1ab04"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.748683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") pod \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.749065 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") pod \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.749127 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") pod \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.749457 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.749479 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.750002 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util" (OuterVolumeSpecName: "util") pod "b691bd15-43f8-4823-917b-7c27b8ca4ba6" (UID: "b691bd15-43f8-4823-917b-7c27b8ca4ba6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.754660 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle" (OuterVolumeSpecName: "bundle") pod "b691bd15-43f8-4823-917b-7c27b8ca4ba6" (UID: "b691bd15-43f8-4823-917b-7c27b8ca4ba6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.851209 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.851285 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.236992 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.236992 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds"
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.237013 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerDied","Data":"7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597"}
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.237511 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597"
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.240126 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerDied","Data":"d14c9322216608ff3fd9b4c5f70c9086a5972c70a87762641033ea553f1b5def"}
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.240175 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95"
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.240177 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d14c9322216608ff3fd9b4c5f70c9086a5972c70a87762641033ea553f1b5def"
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.660233 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5" (OuterVolumeSpecName: "kube-api-access-9w4w5") pod "b691bd15-43f8-4823-917b-7c27b8ca4ba6" (UID: "b691bd15-43f8-4823-917b-7c27b8ca4ba6"). InnerVolumeSpecName "kube-api-access-9w4w5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.661465 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") on node \"crc\" DevicePath \"\""
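
The "SyncLoop (PLEG): event for pod" entries above come from the Pod Lifecycle Event Generator: it periodically relists containers from the runtime, diffs the new snapshot against the previous one, and turns each difference into an event such as ContainerDied; the pod_container_deletor line fires when the dead sandbox ID is already gone from the pod's container list. A toy relist-and-diff in that spirit, with invented states and IDs:

    package main

    import "fmt"

    // Compare the previous container snapshot against the current one and
    // emit lifecycle events. Illustrative only; the real PLEG lives in the
    // kubelet and reads container states from CRI-O.
    func relist(prev, curr map[string]string) []string {
        var events []string
        for id, state := range curr {
            switch {
            case prev[id] != "running" && state == "running":
                events = append(events, "ContainerStarted "+id)
            case prev[id] == "running" && state == "exited":
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        prev := map[string]string{"7aa0e218": "running"}
        curr := map[string]string{"7aa0e218": "exited"}
        for _, e := range relist(prev, curr) {
            fmt.Println("SyncLoop (PLEG): event:", e)
        }
    }
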
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.661695 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util" (OuterVolumeSpecName: "util") pod "a2f76b8f-1fff-44e6-931b-d35852c1ab04" (UID: "a2f76b8f-1fff-44e6-931b-d35852c1ab04"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.763238 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") on node \"crc\" DevicePath \"\""
Jan 28 18:26:23 crc kubenswrapper[4985]: I0128 18:26:23.248846 4985 generic.go:334] "Generic (PLEG): container finished" podID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerID="c0101ba127274bf28c8cc50d2966b9e93977f192a37fbe59aa75129ed11ee8f9" exitCode=0
Jan 28 18:26:23 crc kubenswrapper[4985]: I0128 18:26:23.248900 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerDied","Data":"c0101ba127274bf28c8cc50d2966b9e93977f192a37fbe59aa75129ed11ee8f9"}
Jan 28 18:26:24 crc kubenswrapper[4985]: I0128 18:26:24.256156 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerStarted","Data":"5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98"}
Jan 28 18:26:24 crc kubenswrapper[4985]: I0128 18:26:24.279788 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dzwh" podStartSLOduration=2.818329796 podStartE2EDuration="6.279771901s" podCreationTimestamp="2026-01-28 18:26:18 +0000 UTC" firstStartedPulling="2026-01-28 18:26:20.205065082 +0000 UTC m=+791.031627903" lastFinishedPulling="2026-01-28 18:26:23.666507187 +0000 UTC m=+794.493070008" observedRunningTime="2026-01-28 18:26:24.277689832 +0000 UTC m=+795.104252653" watchObservedRunningTime="2026-01-28 18:26:24.279771901 +0000 UTC m=+795.106334722"
Jan 28 18:26:29 crc kubenswrapper[4985]: I0128 18:26:29.189845 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:29 crc kubenswrapper[4985]: I0128 18:26:29.190199 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:30 crc kubenswrapper[4985]: I0128 18:26:30.232123 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dzwh" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server" probeResult="failure" output=<
Jan 28 18:26:30 crc kubenswrapper[4985]: 	timeout: failed to connect service ":50051" within 1s
Jan 28 18:26:30 crc kubenswrapper[4985]: >
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.969219 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"]
Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.969929 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="pull"
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.969949 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="pull"
Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.969963 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="util"
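
The "Probe failed" block above is grpc_health_probe timing out against the registry-server's port: the startup probe dials :50051 with a 1s deadline, and while the catalog is still loading the dial fails, so the kubelet keeps the pod unready and retries (the probe finally passes at 18:26:39). The sketch below reproduces only the dial-with-deadline part; the real probe additionally issues a gRPC health Check once connected:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Try to connect to the health port within the deadline and report
    // failure the way the log above does. A sketch, not the real prober.
    func probe(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return fmt.Errorf("timeout: failed to connect service %q within %s", addr, timeout)
        }
        return conn.Close()
    }

    func main() {
        if err := probe(":50051", 1*time.Second); err != nil {
            fmt.Println("probeResult=failure:", err)
        }
    }
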
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.969971 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="util"
Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.969984 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="pull"
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.969992 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="pull"
Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.970008 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="util"
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970015 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="util"
Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.970026 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="extract"
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970033 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="extract"
Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.970046 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="extract"
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970053 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="extract"
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970214 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="extract"
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970238 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="extract"
Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.971136 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.974067 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.974181 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.975075 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.975492 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.975753 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.976090 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-mn6br" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.995950 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"] Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.097776 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.097823 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-apiservice-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.097847 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fc080bc5-4b4f-4405-b458-7450aaf8714b-manager-config\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.097867 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-webhook-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.098109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstl9\" 
(UniqueName: \"kubernetes.io/projected/fc080bc5-4b4f-4405-b458-7450aaf8714b-kube-api-access-zstl9\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199098 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-apiservice-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199187 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fc080bc5-4b4f-4405-b458-7450aaf8714b-manager-config\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199211 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-webhook-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199251 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zstl9\" (UniqueName: \"kubernetes.io/projected/fc080bc5-4b4f-4405-b458-7450aaf8714b-kube-api-access-zstl9\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.200081 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fc080bc5-4b4f-4405-b458-7450aaf8714b-manager-config\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.206727 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-apiservice-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.211048 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-webhook-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.228963 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.232060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zstl9\" (UniqueName: \"kubernetes.io/projected/fc080bc5-4b4f-4405-b458-7450aaf8714b-kube-api-access-zstl9\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.287869 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.730562 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"] Jan 28 18:26:32 crc kubenswrapper[4985]: W0128 18:26:32.736946 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc080bc5_4b4f_4405_b458_7450aaf8714b.slice/crio-b4d85b1e81cf2e318d4242fc41c67acc871047936680b26a3c26a77ef6d9db0c WatchSource:0}: Error finding container b4d85b1e81cf2e318d4242fc41c67acc871047936680b26a3c26a77ef6d9db0c: Status 404 returned error can't find the container with id b4d85b1e81cf2e318d4242fc41c67acc871047936680b26a3c26a77ef6d9db0c Jan 28 18:26:33 crc kubenswrapper[4985]: I0128 18:26:33.331103 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerStarted","Data":"b4d85b1e81cf2e318d4242fc41c67acc871047936680b26a3c26a77ef6d9db0c"} Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.407826 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5"] Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.408880 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.410677 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.410891 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.411201 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-lmv4l" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.425338 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5"] Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.564438 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6wwk\" (UniqueName: \"kubernetes.io/projected/4db97b28-803f-4b66-9322-f210440517ff-kube-api-access-j6wwk\") pod \"cluster-logging-operator-79cf69ddc8-d28w5\" (UID: \"4db97b28-803f-4b66-9322-f210440517ff\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.666510 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6wwk\" (UniqueName: \"kubernetes.io/projected/4db97b28-803f-4b66-9322-f210440517ff-kube-api-access-j6wwk\") pod \"cluster-logging-operator-79cf69ddc8-d28w5\" (UID: \"4db97b28-803f-4b66-9322-f210440517ff\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.692361 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6wwk\" (UniqueName: \"kubernetes.io/projected/4db97b28-803f-4b66-9322-f210440517ff-kube-api-access-j6wwk\") pod \"cluster-logging-operator-79cf69ddc8-d28w5\" (UID: \"4db97b28-803f-4b66-9322-f210440517ff\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.730938 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:38 crc kubenswrapper[4985]: I0128 18:26:38.065612 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5"] Jan 28 18:26:38 crc kubenswrapper[4985]: W0128 18:26:38.073432 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4db97b28_803f_4b66_9322_f210440517ff.slice/crio-f5e33ac8d78cd1e86fb00895970e81656e51f1b0ba4ad4d18bdcd27a430d89b6 WatchSource:0}: Error finding container f5e33ac8d78cd1e86fb00895970e81656e51f1b0ba4ad4d18bdcd27a430d89b6: Status 404 returned error can't find the container with id f5e33ac8d78cd1e86fb00895970e81656e51f1b0ba4ad4d18bdcd27a430d89b6 Jan 28 18:26:38 crc kubenswrapper[4985]: I0128 18:26:38.364274 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" event={"ID":"4db97b28-803f-4b66-9322-f210440517ff","Type":"ContainerStarted","Data":"f5e33ac8d78cd1e86fb00895970e81656e51f1b0ba4ad4d18bdcd27a430d89b6"} Jan 28 18:26:38 crc kubenswrapper[4985]: I0128 18:26:38.365903 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerStarted","Data":"e91c414e4bddd6fb7b100b376f20e51c053f866b5e844a819f4081df4b77080f"} Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.248177 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.298439 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.672228 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-92xk4"] Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.673789 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.682821 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"] Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.827072 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.827357 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.827459 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929183 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929333 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929372 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929810 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929911 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.951395 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:40 crc kubenswrapper[4985]: I0128 18:26:40.012167 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:40 crc kubenswrapper[4985]: I0128 18:26:40.651240 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"] Jan 28 18:26:40 crc kubenswrapper[4985]: W0128 18:26:40.673173 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod869b5731_3bfc_4db2_af7e_a065f8fbcf0f.slice/crio-488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52 WatchSource:0}: Error finding container 488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52: Status 404 returned error can't find the container with id 488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52 Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.186357 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.186807 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.414759 4985 generic.go:334] "Generic (PLEG): container finished" podID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerID="0d1f250737c643fbc85140566ed81835e3f4db2d92ec1ed36f15c0c9eb2c030a" exitCode=0 Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.414807 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerDied","Data":"0d1f250737c643fbc85140566ed81835e3f4db2d92ec1ed36f15c0c9eb2c030a"} Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.414838 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerStarted","Data":"488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52"} Jan 28 18:26:43 crc kubenswrapper[4985]: I0128 18:26:43.256689 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"] Jan 28 18:26:43 crc kubenswrapper[4985]: I0128 18:26:43.257166 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4dzwh" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server" containerID="cri-o://5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" gracePeriod=2 Jan 28 18:26:43 crc kubenswrapper[4985]: I0128 18:26:43.433116 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" exitCode=0 Jan 28 18:26:43 crc kubenswrapper[4985]: I0128 18:26:43.433201 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerDied","Data":"5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98"} Jan 28 18:26:49 crc kubenswrapper[4985]: E0128 18:26:49.190219 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98 is running failed: container process not found" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:26:49 crc kubenswrapper[4985]: E0128 18:26:49.191163 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98 is running failed: container process not found" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:26:49 crc kubenswrapper[4985]: E0128 18:26:49.191536 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98 is running failed: container process not found" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:26:49 crc kubenswrapper[4985]: E0128 18:26:49.191571 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-4dzwh" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.678799 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.777777 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") pod \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.778191 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") pod \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.778393 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") pod \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.780017 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities" (OuterVolumeSpecName: "utilities") pod "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" (UID: "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.796661 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44" (OuterVolumeSpecName: "kube-api-access-v4b44") pod "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" (UID: "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9"). InnerVolumeSpecName "kube-api-access-v4b44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.880433 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.880651 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.930075 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" (UID: "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.982588 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.480579 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerStarted","Data":"b2537536e480df8807fbf335c3a21af976e198c4fcbd7f19aee7615203234ab0"}
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.482041 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.483923 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.484684 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerDied","Data":"edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14"}
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.484779 4985 scope.go:117] "RemoveContainer" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98"
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.484716 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dzwh"
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.486492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" event={"ID":"4db97b28-803f-4b66-9322-f210440517ff","Type":"ContainerStarted","Data":"ac84eec0161e8817b9ff325278032ec77effc79279e7d70fe1c3a60cd6c6aa23"}
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.488701 4985 generic.go:334] "Generic (PLEG): container finished" podID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerID="191c84609dfb2c8268e33648b1fa5d4251ffb2f7286e97b627cb86dee2d94615" exitCode=0
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.488831 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerDied","Data":"191c84609dfb2c8268e33648b1fa5d4251ffb2f7286e97b627cb86dee2d94615"}
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.500686 4985 scope.go:117] "RemoveContainer" containerID="c0101ba127274bf28c8cc50d2966b9e93977f192a37fbe59aa75129ed11ee8f9"
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.516916 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podStartSLOduration=2.495678618 podStartE2EDuration="19.516891185s" podCreationTimestamp="2026-01-28 18:26:31 +0000 UTC" firstStartedPulling="2026-01-28 18:26:32.740422625 +0000 UTC m=+803.566985486" lastFinishedPulling="2026-01-28 18:26:49.761635232 +0000 UTC m=+820.588198053" observedRunningTime="2026-01-28 18:26:50.507992953 +0000 UTC m=+821.334555774" watchObservedRunningTime="2026-01-28 18:26:50.516891185 +0000 UTC m=+821.343454066"
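
The pod_startup_latency_tracker entry above for the loki-operator pod is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (18:26:31 to 18:26:50.516891185, i.e. 19.516891185s), and podStartSLOduration is that figure minus the image-pull window (m=+820.588198053 minus m=+803.566985486, i.e. 17.021212567s), which gives the logged 2.495678618s. A few lines of Go to check the arithmetic:

    package main

    import "fmt"

    // Recompute the loki-operator startup numbers from the tracker entry
    // above. The m=+ values are the kubelet's monotonic readings bracketing
    // the image pull; SLO duration excludes that window.
    func main() {
        e2e := 19.516891185                   // observedRunningTime - podCreationTimestamp, seconds
        pull := 820.588198053 - 803.566985486 // lastFinishedPulling - firstStartedPulling, seconds
        fmt.Printf("podStartSLOduration=%.9f\n", e2e-pull) // prints 2.495678618
    }
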
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.534217 4985 scope.go:117] "RemoveContainer" containerID="e8d028e6fa502a4926094f90447dd5b0dfaa5b2776af57350b61ce63ec91efa8"
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.563164 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"]
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.574731 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"]
Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.612631 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" podStartSLOduration=2.9341536870000002 podStartE2EDuration="14.612610467s" podCreationTimestamp="2026-01-28 18:26:36 +0000 UTC" firstStartedPulling="2026-01-28 18:26:38.0760221 +0000 UTC m=+808.902584921" lastFinishedPulling="2026-01-28 18:26:49.75447888 +0000 UTC m=+820.581041701" observedRunningTime="2026-01-28 18:26:50.606743111 +0000 UTC m=+821.433305942" watchObservedRunningTime="2026-01-28 18:26:50.612610467 +0000 UTC m=+821.439173288"
Jan 28 18:26:51 crc kubenswrapper[4985]: I0128 18:26:51.272687 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" path="/var/lib/kubelet/pods/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9/volumes"
Jan 28 18:26:52 crc kubenswrapper[4985]: I0128 18:26:52.506682 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerStarted","Data":"d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96"}
Jan 28 18:26:52 crc kubenswrapper[4985]: I0128 18:26:52.536661 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-92xk4" podStartSLOduration=3.272692851 podStartE2EDuration="13.536628356s" podCreationTimestamp="2026-01-28 18:26:39 +0000 UTC" firstStartedPulling="2026-01-28 18:26:41.419973537 +0000 UTC m=+812.246536368" lastFinishedPulling="2026-01-28 18:26:51.683909052 +0000 UTC m=+822.510471873" observedRunningTime="2026-01-28 18:26:52.532746707 +0000 UTC m=+823.359309528" watchObservedRunningTime="2026-01-28 18:26:52.536628356 +0000 UTC m=+823.363191177"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.490932 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Jan 28 18:26:55 crc kubenswrapper[4985]: E0128 18:26:55.491829 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="extract-utilities"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.491859 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="extract-utilities"
Jan 28 18:26:55 crc kubenswrapper[4985]: E0128 18:26:55.491884 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="extract-content"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.491892 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="extract-content"
Jan 28 18:26:55 crc kubenswrapper[4985]: E0128 18:26:55.491928 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server"
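
The interleaved cpu_manager, state_mem and memory_manager entries around this point are admission-time housekeeping: before the new minio pod is admitted, RemoveStaleState drops per-container CPU and memory assignments still recorded for pods that no longer exist (here the deleted redhat-operators-4dzwh catalog pod). A simplified sweep over a map of assignments, with invented types:

    package main

    import "fmt"

    // Drop CPUSet assignments recorded for (podUID, container) pairs whose
    // pod is no longer active. Illustrative only.
    type assignment struct{ podUID, container string }

    func removeStaleState(cpuSets map[assignment]string, activePods map[string]bool) {
        for a := range cpuSets {
            if !activePods[a.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    a.podUID, a.container)
                delete(cpuSets, a) // "Deleted CPUSet assignment"
            }
        }
    }

    func main() {
        cpuSets := map[assignment]string{
            {podUID: "6d87bdf0", container: "registry-server"}: "0-3",
        }
        removeStaleState(cpuSets, map[string]bool{"fc080bc5": true}) // catalog pod is gone
    }
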
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.491936 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.496089 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.497441 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.503366 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.503844 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.506365 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.659473 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f1781877-5af0-43d7-931c-0b572cde5552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1781877-5af0-43d7-931c-0b572cde5552\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.659554 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk4kd\" (UniqueName: \"kubernetes.io/projected/8fa05e4c-a197-4caa-baff-285c1b90274b-kube-api-access-nk4kd\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.761163 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f1781877-5af0-43d7-931c-0b572cde5552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1781877-5af0-43d7-931c-0b572cde5552\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.761233 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk4kd\" (UniqueName: \"kubernetes.io/projected/8fa05e4c-a197-4caa-baff-285c1b90274b-kube-api-access-nk4kd\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.764161 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
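
The csi_attacher line above shows the two-phase CSI node mount: MountDevice (NodeStageVolume, into a shared globalmount path) runs only if the driver advertises the STAGE_UNSTAGE_VOLUME capability; kubevirt.io.hostpath-provisioner does not, so the kubelet skips staging and goes straight to MountVolume.SetUp (NodePublishVolume), which is what the next entries report. The gate, reduced to a sketch with invented function names:

    package main

    import "fmt"

    // A driver without STAGE_UNSTAGE_VOLUME skips NodeStageVolume and is
    // published directly into the pod's volume path. Illustrative only.
    func nodeMount(driver string, caps map[string]bool) {
        if caps["STAGE_UNSTAGE_VOLUME"] {
            fmt.Println(driver + ": MountDevice (NodeStageVolume) into the globalmount path")
        } else {
            fmt.Println(driver + ": STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
        }
        fmt.Println(driver + ": MountVolume.SetUp (NodePublishVolume) into the pod's volume path")
    }

    func main() {
        nodeMount("kubevirt.io.hostpath-provisioner", map[string]bool{}) // the case in the log
    }
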
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.764190 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f1781877-5af0-43d7-931c-0b572cde5552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1781877-5af0-43d7-931c-0b572cde5552\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0dfdd8f7ea2c81834327a58594b515cf36ff0ea5bd50ef20152bed47b4a10073/globalmount\"" pod="minio-dev/minio" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.792458 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f1781877-5af0-43d7-931c-0b572cde5552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1781877-5af0-43d7-931c-0b572cde5552\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.801185 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk4kd\" (UniqueName: \"kubernetes.io/projected/8fa05e4c-a197-4caa-baff-285c1b90274b-kube-api-access-nk4kd\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.837466 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 28 18:26:56 crc kubenswrapper[4985]: I0128 18:26:56.295424 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 28 18:26:56 crc kubenswrapper[4985]: I0128 18:26:56.536933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"8fa05e4c-a197-4caa-baff-285c1b90274b","Type":"ContainerStarted","Data":"17d7018e282ed9af8dfe1fbe0dabcb857f595e3642584d4d21030b809487c064"} Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.015463 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.016072 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.098012 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.605204 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.648452 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"] Jan 28 18:27:02 crc kubenswrapper[4985]: I0128 18:27:02.596794 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-92xk4" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server" containerID="cri-o://d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" gracePeriod=2 Jan 28 18:27:05 crc kubenswrapper[4985]: I0128 18:27:05.628245 4985 generic.go:334] "Generic (PLEG): container finished" podID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" exitCode=0 Jan 28 18:27:05 crc kubenswrapper[4985]: I0128 18:27:05.628274 4985 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerDied","Data":"d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96"} Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.012984 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96 is running failed: container process not found" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.014142 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96 is running failed: container process not found" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.016210 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96 is running failed: container process not found" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.016304 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-92xk4" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server" Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.643677 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/minio/minio:latest" Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.644133 4985 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 18:27:10 crc kubenswrapper[4985]: container &Container{Name:minio,Image:quay.io/minio/minio:latest,Command:[/bin/bash -c mkdir -p /data/loki && \ Jan 28 18:27:10 crc kubenswrapper[4985]: minio server /data Jan 28 18:27:10 crc kubenswrapper[4985]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MINIO_ACCESS_KEY,Value:minio,ValueFrom:nil,},EnvVar{Name:MINIO_SECRET_KEY,Value:minio123,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:storage,ReadOnly:false,MountPath:/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nk4kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod minio_minio-dev(8fa05e4c-a197-4caa-baff-285c1b90274b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled
Jan 28 18:27:10 crc kubenswrapper[4985]: > logger="UnhandledError"
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.645280 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minio\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="minio-dev/minio" podUID="8fa05e4c-a197-4caa-baff-285c1b90274b"
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.678810 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerDied","Data":"488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52"}
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.678860 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52"
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.680077 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minio\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/minio/minio:latest\\\"\"" pod="minio-dev/minio" podUID="8fa05e4c-a197-4caa-baff-285c1b90274b"
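
The minio pull above fails with ErrImagePull because the image copy was cancelled mid-transfer; the immediately following sync attempt is then rejected with ImagePullBackOff rather than re-pulled on the spot. Kubernetes documents the back-off as a doubling delay (10s, 20s, 40s, ...) capped at five minutes and reset by a successful pull, which fits the pod finally starting at 18:27:26. The documented shape, as a loop (the loop itself is only an illustration, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    // Back-off constants per the Kubernetes documentation: start at 10s,
    // double per failed attempt, cap at 5 minutes.
    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d failed: ImagePullBackOff, next pull in %s\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
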
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.681325 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92xk4"
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.808845 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") pod \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") "
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.808962 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") pod \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") "
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.809044 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") pod \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") "
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.810235 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities" (OuterVolumeSpecName: "utilities") pod "869b5731-3bfc-4db2-af7e-a065f8fbcf0f" (UID: "869b5731-3bfc-4db2-af7e-a065f8fbcf0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.816630 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn" (OuterVolumeSpecName: "kube-api-access-glgsn") pod "869b5731-3bfc-4db2-af7e-a065f8fbcf0f" (UID: "869b5731-3bfc-4db2-af7e-a065f8fbcf0f"). InnerVolumeSpecName "kube-api-access-glgsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.880987 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "869b5731-3bfc-4db2-af7e-a065f8fbcf0f" (UID: "869b5731-3bfc-4db2-af7e-a065f8fbcf0f"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.911281 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") on node \"crc\" DevicePath \"\"" Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.911317 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.911333 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.185995 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.186100 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.186163 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.186957 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.187040 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2" gracePeriod=600 Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.694357 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2" exitCode=0 Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.694966 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.695845 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2"} Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.695890 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab"} Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.696090 4985 scope.go:117] "RemoveContainer" containerID="7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e" Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.724196 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"] Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.731559 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"] Jan 28 18:27:13 crc kubenswrapper[4985]: I0128 18:27:13.277121 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" path="/var/lib/kubelet/pods/869b5731-3bfc-4db2-af7e-a065f8fbcf0f/volumes" Jan 28 18:27:26 crc kubenswrapper[4985]: I0128 18:27:26.821470 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"8fa05e4c-a197-4caa-baff-285c1b90274b","Type":"ContainerStarted","Data":"247208a62a9fd9696af76842086b6539ee86ffefaec40a46abe8dc43f1f10530"} Jan 28 18:27:26 crc kubenswrapper[4985]: I0128 18:27:26.849949 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.46905453 podStartE2EDuration="33.849922768s" podCreationTimestamp="2026-01-28 18:26:53 +0000 UTC" firstStartedPulling="2026-01-28 18:26:56.306765246 +0000 UTC m=+827.133328067" lastFinishedPulling="2026-01-28 18:27:25.687633444 +0000 UTC m=+856.514196305" observedRunningTime="2026-01-28 18:27:26.840568004 +0000 UTC m=+857.667130865" watchObservedRunningTime="2026-01-28 18:27:26.849922768 +0000 UTC m=+857.676485629" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.327627 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"] Jan 28 18:27:33 crc kubenswrapper[4985]: E0128 18:27:33.328466 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="extract-utilities" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.328480 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="extract-utilities" Jan 28 18:27:33 crc kubenswrapper[4985]: E0128 18:27:33.328500 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="extract-content" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.328507 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="extract-content" Jan 28 18:27:33 crc kubenswrapper[4985]: E0128 18:27:33.328518 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.328524 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.328630 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.329080 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.333505 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-stzxf" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.333822 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.333955 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.334063 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.334209 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.340069 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"] Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.470184 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-dkn9m"] Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.471632 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.478504 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.478886 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.479053 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.493281 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-dkn9m"] Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.532188 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-config\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.532682 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.532830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.533015 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djgj\" (UniqueName: \"kubernetes.io/projected/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-kube-api-access-5djgj\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.533228 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.556903 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"] Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.557712 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.559225 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.562308 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.569685 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"] Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634389 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5djgj\" (UniqueName: \"kubernetes.io/projected/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-kube-api-access-5djgj\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634712 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634752 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-s3\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634811 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634833 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-config\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634862 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqsww\" (UniqueName: 
\"kubernetes.io/projected/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-kube-api-access-jqsww\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634882 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634922 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634946 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-config\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.635914 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.636390 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-config\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.654351 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.654366 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-grpc\") 
pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.665994 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-g5tqr"] Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.676675 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.686524 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.686894 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-2pqzh" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.687092 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.687391 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.687663 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.687857 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.703486 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5djgj\" (UniqueName: \"kubernetes.io/projected/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-kube-api-access-5djgj\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.730357 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-g5tqr"] Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736128 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-s3\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736175 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-config\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" 
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736232 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736354 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqsww\" (UniqueName: \"kubernetes.io/projected/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-kube-api-access-jqsww\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736423 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8gzq\" (UniqueName: \"kubernetes.io/projected/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-kube-api-access-f8gzq\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736442 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-config\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736488 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736506 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 
28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.737978 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.744684 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-c6d96"] Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.744863 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.746078 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.750017 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-c6d96"] Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.751727 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-config\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.756121 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-s3\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.761875 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.766089 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqsww\" (UniqueName: \"kubernetes.io/projected/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-kube-api-access-jqsww\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.793289 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838048 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838121 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8gzq\" (UniqueName: \"kubernetes.io/projected/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-kube-api-access-f8gzq\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838155 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5zhn\" (UniqueName: \"kubernetes.io/projected/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-kube-api-access-n5zhn\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838185 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-rbac\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838364 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838453 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838487 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838509 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" 
(UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838528 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tls-secret\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838545 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838586 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqps9\" (UniqueName: \"kubernetes.io/projected/ae6864ac-d6e2-4d85-aa84-361f51b944eb-kube-api-access-mqps9\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838642 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838668 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tenants\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tls-secret\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838716 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-config\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " 
pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838757 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tenants\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838786 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-rbac\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838807 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838824 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838845 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.839810 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.841881 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-config\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.842092 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.842616 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.860464 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8gzq\" (UniqueName: \"kubernetes.io/projected/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-kube-api-access-f8gzq\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.877443 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5zhn\" (UniqueName: \"kubernetes.io/projected/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-kube-api-access-n5zhn\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940657 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-rbac\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940710 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940753 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940781 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940802 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tls-secret\") pod 
\"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940825 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940848 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqps9\" (UniqueName: \"kubernetes.io/projected/ae6864ac-d6e2-4d85-aa84-361f51b944eb-kube-api-access-mqps9\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940886 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tenants\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940918 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tls-secret\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940942 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940965 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tenants\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940992 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-rbac\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.941012 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 
18:27:33.941031 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.941055 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.942380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.944782 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.944903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-rbac\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.946474 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tenants\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.946662 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tls-secret\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.947523 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.947613 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.947982 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-rbac\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.958360 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.958686 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.958793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.961506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.961724 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqps9\" (UniqueName: \"kubernetes.io/projected/ae6864ac-d6e2-4d85-aa84-361f51b944eb-kube-api-access-mqps9\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.961883 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.963854 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tls-secret\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.966454 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5zhn\" (UniqueName: \"kubernetes.io/projected/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-kube-api-access-n5zhn\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.966459 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tenants\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.035492 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.105714 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.230942 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-dkn9m"]
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.359785 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"]
Jan 28 18:27:34 crc kubenswrapper[4985]: W0128 18:27:34.363598 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c56d4fe_62c7_47ef_9a0f_607d899d19b8.slice/crio-7451ce76c0eeac02a853d076996cdb46adc418e5efa56e5641b4213b58bbfa0e WatchSource:0}: Error finding container 7451ce76c0eeac02a853d076996cdb46adc418e5efa56e5641b4213b58bbfa0e: Status 404 returned error can't find the container with id 7451ce76c0eeac02a853d076996cdb46adc418e5efa56e5641b4213b58bbfa0e
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.467376 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"]
Jan 28 18:27:34 crc kubenswrapper[4985]: W0128 18:27:34.468347 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeffc2fb2_2eb7_4ea0_abf1_0d43bde4adeb.slice/crio-4b46b6dbff4cc34d5bbe20467db5806c99ab61783cfb50c150fea3d55b94fd7d WatchSource:0}: Error finding container 4b46b6dbff4cc34d5bbe20467db5806c99ab61783cfb50c150fea3d55b94fd7d: Status 404 returned error can't find the container with id 4b46b6dbff4cc34d5bbe20467db5806c99ab61783cfb50c150fea3d55b94fd7d
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.488196 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.489300 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.492866 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.493796 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http"
Jan 28 18:27:34 crc kubenswrapper[4985]: W0128 18:27:34.499530 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae6864ac_d6e2_4d85_aa84_361f51b944eb.slice/crio-98c517ac65262433c9fc503fa4e0561a169da5ada6742db210ebff64d028673a WatchSource:0}: Error finding container 98c517ac65262433c9fc503fa4e0561a169da5ada6742db210ebff64d028673a: Status 404 returned error can't find the container with id 98c517ac65262433c9fc503fa4e0561a169da5ada6742db210ebff64d028673a
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.501536 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-g5tqr"]
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.511082 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.537072 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.537922 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.539719 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.542077 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.549439 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.579585 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-c6d96"]
Jan 28 18:27:34 crc kubenswrapper[4985]: W0128 18:27:34.582852 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02e0988e_bb4d_4c63_a4aa_3f1432a1ee7b.slice/crio-ce25d9d30a9830dda3a3182457002b75267272cafe29fa8789893581aa5cb516 WatchSource:0}: Error finding container ce25d9d30a9830dda3a3182457002b75267272cafe29fa8789893581aa5cb516: Status 404 returned error can't find the container with id ce25d9d30a9830dda3a3182457002b75267272cafe29fa8789893581aa5cb516
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.615672 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.616935 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.619410 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.619609 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.621968 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.657841 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.657882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-config\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.657912 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e5484364-652f-4506-b78b-405e87866424\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5484364-652f-4506-b78b-405e87866424\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658031 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658070 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnn5\" (UniqueName: \"kubernetes.io/projected/e322915e-933c-4de4-98dd-ef047ee5b056-kube-api-access-wtnn5\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658145 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658218 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658239 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-config\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658279 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxk6\" (UniqueName: \"kubernetes.io/projected/ac72f54d-936d-4c98-9f91-918f7a05b5d1-kube-api-access-bsxk6\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658305 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658337 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658378 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658461 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658477 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.759808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.759865 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.759930 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e5484364-652f-4506-b78b-405e87866424\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5484364-652f-4506-b78b-405e87866424\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.759964 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760028 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2dwn\" (UniqueName: \"kubernetes.io/projected/664a7afe-25ae-45f8-81bd-9a9c59c431cd-kube-api-access-w2dwn\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760057 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760084 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760230 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760501 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760546 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-config\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760605 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760638 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnn5\" (UniqueName: \"kubernetes.io/projected/e322915e-933c-4de4-98dd-ef047ee5b056-kube-api-access-wtnn5\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760705 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760733 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760758 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-config\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760780 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsxk6\" (UniqueName: \"kubernetes.io/projected/ac72f54d-936d-4c98-9f91-918f7a05b5d1-kube-api-access-bsxk6\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760806 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760833 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.761715 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-config\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.762195 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.762797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.764575 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-config\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.766212 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.766231 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767168 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767378 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767416 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e5484364-652f-4506-b78b-405e87866424\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5484364-652f-4506-b78b-405e87866424\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d34aebcddf21e72b6271ca9fd89e77f2902f6b93aa7b3d4cec0d014dfe6e8f6/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767520 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767583 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0a50106bc928bdbed945f7ef72ab597a68c4c7a6f33ecb55fb4d0f537b7d613d/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767736 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.768018 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.768054 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f86c0bdb72dc4e631fa3430a68d817f45a059b0d41cd015f7b9c23b2d7dc03d4/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.770026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.771831 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.778368 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsxk6\" (UniqueName: \"kubernetes.io/projected/ac72f54d-936d-4c98-9f91-918f7a05b5d1-kube-api-access-bsxk6\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.784120 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnn5\" (UniqueName: \"kubernetes.io/projected/e322915e-933c-4de4-98dd-ef047ee5b056-kube-api-access-wtnn5\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.794646 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.795234 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.797876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e5484364-652f-4506-b78b-405e87866424\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5484364-652f-4506-b78b-405e87866424\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.816782 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.858939 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.862507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.862684 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.862713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2dwn\" (UniqueName: \"kubernetes.io/projected/664a7afe-25ae-45f8-81bd-9a9c59c431cd-kube-api-access-w2dwn\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.862738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.863152 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.863588 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.863639 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.864591 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.865011 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.866489 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.866767 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.866828 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da5ff38c7787397afb3cc363a26e7e8fa9ae822407f71e523b9148e301f40a94/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.866861 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.867052 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.880705 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2dwn\" (UniqueName: \"kubernetes.io/projected/664a7afe-25ae-45f8-81bd-9a9c59c431cd-kube-api-access-w2dwn\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.882712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" event={"ID":"5c56d4fe-62c7-47ef-9a0f-607d899d19b8","Type":"ContainerStarted","Data":"7451ce76c0eeac02a853d076996cdb46adc418e5efa56e5641b4213b58bbfa0e"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.885188 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" event={"ID":"ae6864ac-d6e2-4d85-aa84-361f51b944eb","Type":"ContainerStarted","Data":"98c517ac65262433c9fc503fa4e0561a169da5ada6742db210ebff64d028673a"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.886221 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" event={"ID":"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b","Type":"ContainerStarted","Data":"ce25d9d30a9830dda3a3182457002b75267272cafe29fa8789893581aa5cb516"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.901114 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" event={"ID":"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb","Type":"ContainerStarted","Data":"4b46b6dbff4cc34d5bbe20467db5806c99ab61783cfb50c150fea3d55b94fd7d"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.918656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" event={"ID":"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7","Type":"ContainerStarted","Data":"2b9d1e6ddcc3d486b25b59b9b3b27d1121412cfc510cc740b881f81c041aed0d"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.923857 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.931366 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.222081 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Jan 28 18:27:35 crc kubenswrapper[4985]: W0128 18:27:35.226279 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode322915e_933c_4de4_98dd_ef047ee5b056.slice/crio-b3aaf44848f1f5e8a2acf0691eac5c29fa36f7435b88db2f351a8b3d8a61251f WatchSource:0}: Error finding container b3aaf44848f1f5e8a2acf0691eac5c29fa36f7435b88db2f351a8b3d8a61251f: Status 404 returned error can't find the container with id b3aaf44848f1f5e8a2acf0691eac5c29fa36f7435b88db2f351a8b3d8a61251f
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.293098 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Jan 28 18:27:35 crc kubenswrapper[4985]: W0128 18:27:35.294619 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac72f54d_936d_4c98_9f91_918f7a05b5d1.slice/crio-05f29297abaf23f20c9b6aa2c33cf8d8235321abd64bd3311ec1f63133a5e51f WatchSource:0}: Error finding container 05f29297abaf23f20c9b6aa2c33cf8d8235321abd64bd3311ec1f63133a5e51f: Status 404 returned error can't find the container with id 05f29297abaf23f20c9b6aa2c33cf8d8235321abd64bd3311ec1f63133a5e51f
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.347693 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 28 18:27:35 crc kubenswrapper[4985]: W0128 18:27:35.353785 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod664a7afe_25ae_45f8_81bd_9a9c59c431cd.slice/crio-882c8ccf382662e9661161384c9f7d44ee73628918020cd4930a4c8f50388135 WatchSource:0}: Error finding container 882c8ccf382662e9661161384c9f7d44ee73628918020cd4930a4c8f50388135: Status 404 returned error can't find the container with id 882c8ccf382662e9661161384c9f7d44ee73628918020cd4930a4c8f50388135
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.927999 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"664a7afe-25ae-45f8-81bd-9a9c59c431cd","Type":"ContainerStarted","Data":"882c8ccf382662e9661161384c9f7d44ee73628918020cd4930a4c8f50388135"}
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.929992 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"e322915e-933c-4de4-98dd-ef047ee5b056","Type":"ContainerStarted","Data":"b3aaf44848f1f5e8a2acf0691eac5c29fa36f7435b88db2f351a8b3d8a61251f"}
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.930910 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"ac72f54d-936d-4c98-9f91-918f7a05b5d1","Type":"ContainerStarted","Data":"05f29297abaf23f20c9b6aa2c33cf8d8235321abd64bd3311ec1f63133a5e51f"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.954946 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" event={"ID":"5c56d4fe-62c7-47ef-9a0f-607d899d19b8","Type":"ContainerStarted","Data":"e5d10ad440fd48d587173ef40bb25ee2c50f17e8dfd6388913a8ace6022d8276"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.955648 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.957749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"664a7afe-25ae-45f8-81bd-9a9c59c431cd","Type":"ContainerStarted","Data":"979f7178decf96b036aeaeefc740956920aa4c3e7dea476507625e079d4bf654"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.957892 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.960346 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"e322915e-933c-4de4-98dd-ef047ee5b056","Type":"ContainerStarted","Data":"7b85fb0b4324d5d5159bd3e31814a9b315085473da50651a26099491a3cad1c7"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.960470 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.961853 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" event={"ID":"ae6864ac-d6e2-4d85-aa84-361f51b944eb","Type":"ContainerStarted","Data":"bb98b3a9ae24440a684bdc98d1f296c6416de56f94cf56c8e4ba101fe4b010ce"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.963515 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" event={"ID":"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b","Type":"ContainerStarted","Data":"dfb996e7fc5b44eebaffe384562e7c0762443e351a1b60cec569371d59fdefe2"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.965854 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"ac72f54d-936d-4c98-9f91-918f7a05b5d1","Type":"ContainerStarted","Data":"dfb3a36bbffe1a384711bb7726bff8c8c9f17845fb448441da4e2ac14e7a1ae9"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.965928 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.967873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" event={"ID":"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7","Type":"ContainerStarted","Data":"1bc36136fdf9a9f030bacd5411ac681502b0ed109dc47735176020a3150e8b66"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.968088 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.969116 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" event={"ID":"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb","Type":"ContainerStarted","Data":"7ebdb1482b87e174d14ffaf25af81b75da2729b12bdcc6e6952a1b79ff2f49d4"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.969353 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.973126 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podStartSLOduration=2.23186055 podStartE2EDuration="5.973110733s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.365685306 +0000 UTC m=+865.192248137" lastFinishedPulling="2026-01-28 18:27:38.106935499 +0000 UTC m=+868.933498320" observedRunningTime="2026-01-28 18:27:38.971852578 +0000 UTC m=+869.798415419" watchObservedRunningTime="2026-01-28 18:27:38.973110733 +0000 UTC m=+869.799673544"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.995267 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podStartSLOduration=2.273232917 podStartE2EDuration="5.995241218s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.470881395 +0000 UTC m=+865.297444216" lastFinishedPulling="2026-01-28 18:27:38.192889656 +0000 UTC m=+869.019452517" observedRunningTime="2026-01-28 18:27:38.993999463 +0000 UTC m=+869.820562274" watchObservedRunningTime="2026-01-28 18:27:38.995241218 +0000 UTC m=+869.821804039"
Jan 28 18:27:39 crc kubenswrapper[4985]: I0128 18:27:39.036539 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.078744418 podStartE2EDuration="6.036522793s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:35.228188525 +0000 UTC m=+866.054751336" lastFinishedPulling="2026-01-28 18:27:38.18596689 +0000 UTC m=+869.012529711" observedRunningTime="2026-01-28 18:27:39.019843412 +0000 UTC m=+869.846406253" watchObservedRunningTime="2026-01-28 18:27:39.036522793 +0000 UTC m=+869.863085604"
Jan 28 18:27:39 crc kubenswrapper[4985]: I0128 18:27:39.037335 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" podStartSLOduration=2.072555832 podStartE2EDuration="6.037329606s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.243865986 +0000 UTC m=+865.070428807" lastFinishedPulling="2026-01-28 18:27:38.20863976 +0000 UTC m=+869.035202581" observedRunningTime="2026-01-28 18:27:39.031848191 +0000 UTC m=+869.858411012" watchObservedRunningTime="2026-01-28 18:27:39.037329606 +0000 UTC m=+869.863892427"
Jan 28 18:27:39 crc kubenswrapper[4985]: I0128 18:27:39.055446 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.160743493 podStartE2EDuration="6.055428077s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:35.295908907 +0000 UTC m=+866.122471728" lastFinishedPulling="2026-01-28 18:27:38.190593491 +0000 UTC m=+869.017156312" observedRunningTime="2026-01-28 18:27:39.049430998 +0000 UTC m=+869.875993829" watchObservedRunningTime="2026-01-28 18:27:39.055428077 +0000 UTC m=+869.881990898"
Jan 28 18:27:41 crc kubenswrapper[4985]: I0128 18:27:41.292096 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=5.467065757 podStartE2EDuration="8.292050193s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:35.359521583 +0000 UTC m=+866.186084404" lastFinishedPulling="2026-01-28 18:27:38.184506019 +0000 UTC m=+869.011068840" observedRunningTime="2026-01-28 18:27:39.069390111 +0000 UTC m=+869.895952952" watchObservedRunningTime="2026-01-28 18:27:41.292050193 +0000 UTC m=+872.118613014"
Jan 28 18:27:41 crc kubenswrapper[4985]: I0128 18:27:41.997235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" event={"ID":"ae6864ac-d6e2-4d85-aa84-361f51b944eb","Type":"ContainerStarted","Data":"49b1b47d70ef49d5d3c357e0e4c0260742a1e71fbda027d7a0c7b08b2240878f"}
Jan 28 18:27:41 crc kubenswrapper[4985]: I0128 18:27:41.997607 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:41 crc kubenswrapper[4985]: I0128 18:27:41.997639 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.000451 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" event={"ID":"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b","Type":"ContainerStarted","Data":"bd57bd2da85666a901250eb2b260ff39ea755f7279264c3a5fa429402f673f0e"}
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.000720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.000825 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.014692 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.015667 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.015863 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.016925 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.027145 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podStartSLOduration=2.170748443 podStartE2EDuration="9.027122645s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.505215524 +0000 UTC m=+865.331778345" lastFinishedPulling="2026-01-28 18:27:41.361589716 +0000 UTC m=+872.188152547" observedRunningTime="2026-01-28 18:27:42.026504587 +0000 UTC m=+872.853067438" watchObservedRunningTime="2026-01-28 18:27:42.027122645 +0000 UTC m=+872.853685466"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.079791 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podStartSLOduration=2.283214958 podStartE2EDuration="9.079763911s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.585353837 +0000 UTC m=+865.411916658" lastFinishedPulling="2026-01-28 18:27:41.38190278 +0000 UTC m=+872.208465611" observedRunningTime="2026-01-28 18:27:42.066326252 +0000 UTC m=+872.892889083" watchObservedRunningTime="2026-01-28 18:27:42.079763911 +0000 UTC m=+872.906326742"
Jan 28 18:27:53 crc kubenswrapper[4985]: I0128 18:27:53.803446 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:53 crc kubenswrapper[4985]: I0128 18:27:53.884082 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:53 crc kubenswrapper[4985]: I0128 18:27:53.971842 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:54 crc kubenswrapper[4985]: I0128 18:27:54.822665 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 28 18:27:54 crc kubenswrapper[4985]: I0128 18:27:54.822944 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 18:27:54 crc kubenswrapper[4985]: I0128 18:27:54.865293 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:54 crc kubenswrapper[4985]: I0128 18:27:54.938076 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:28:04 crc kubenswrapper[4985]: I0128 18:28:04.827651 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 28 18:28:04 crc kubenswrapper[4985]: I0128 18:28:04.828390 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 18:28:14 crc kubenswrapper[4985]: I0128 18:28:14.825728 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 28 18:28:14 crc kubenswrapper[4985]: I0128 18:28:14.826322 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 18:28:24 crc kubenswrapper[4985]: I0128 18:28:24.823475 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 28 18:28:24 crc kubenswrapper[4985]: I0128 18:28:24.823990 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 18:28:34 crc kubenswrapper[4985]: I0128 18:28:34.823987 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.589954 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-pg6pj"]
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.591777 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.596041 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.596901 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.597322 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.597878 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-wm86f"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.598156 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.612745 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.671641 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-pg6pj"]
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.678077 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pg6pj"]
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.678821 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-nk5b9 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-pg6pj" podUID="3783738c-5aae-44e2-8406-47ac21968731"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759523 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759557 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759596 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759619 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759702 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759736 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759826 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759936 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862140 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862184 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862202 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862221 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862267 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862289 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862291 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862312 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.862386 4985 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.862424 4985 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.862440 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics podName:3783738c-5aae-44e2-8406-47ac21968731 nodeName:}" failed. No retries permitted until 2026-01-28 18:28:52.362421271 +0000 UTC m=+943.188984092 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics") pod "collector-pg6pj" (UID: "3783738c-5aae-44e2-8406-47ac21968731") : secret "collector-metrics" not found
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.862470 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver podName:3783738c-5aae-44e2-8406-47ac21968731 nodeName:}" failed. No retries permitted until 2026-01-28 18:28:52.362459122 +0000 UTC m=+943.189021943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver") pod "collector-pg6pj" (UID: "3783738c-5aae-44e2-8406-47ac21968731") : secret "collector-syslog-receiver" not found
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.863231 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.863809 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.863998 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.864955 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.870045 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.879761 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName:
\"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj" Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.881498 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj" Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.888987 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.370612 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.370970 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.382326 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.382411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.588148 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pg6pj" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.598136 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-pg6pj" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675699 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675762 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675827 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675863 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675908 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675937 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676012 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676050 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676082 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676167 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: 
\"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676220 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676443 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir" (OuterVolumeSpecName: "datadir") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676874 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676898 4985 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.677085 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config" (OuterVolumeSpecName: "config") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.677393 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.677578 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.679998 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token" (OuterVolumeSpecName: "sa-token") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.680196 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp" (OuterVolumeSpecName: "tmp") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.680409 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token" (OuterVolumeSpecName: "collector-token") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.681874 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics" (OuterVolumeSpecName: "metrics") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.682475 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.683105 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9" (OuterVolumeSpecName: "kube-api-access-nk5b9") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "kube-api-access-nk5b9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.777969 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778010 4985 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778024 4985 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778038 4985 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778051 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778064 4985 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778078 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778090 4985 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778100 4985 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778112 4985 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.599566 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pg6pj" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.657310 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pg6pj"] Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.664868 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-pg6pj"] Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.675823 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-gthjs"] Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.676805 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.685856 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.686595 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-wm86f" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.686882 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.687467 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.687705 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.691061 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-gthjs"] Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.693835 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.796563 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be7250ed-2e5a-403a-abfa-f1855e86ae44-tmp\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.796630 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9j2\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-kube-api-access-8t9j2\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.796687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-sa-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797179 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797527 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797613 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-trusted-ca\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " 
pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797873 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-entrypoint\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797921 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be7250ed-2e5a-403a-abfa-f1855e86ae44-datadir\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.798000 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config-openshift-service-cacrt\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.798060 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-syslog-receiver\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.798160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-metrics\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899872 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-trusted-ca\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899926 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-entrypoint\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be7250ed-2e5a-403a-abfa-f1855e86ae44-datadir\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899978 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config-openshift-service-cacrt\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-syslog-receiver\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900039 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-metrics\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900074 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be7250ed-2e5a-403a-abfa-f1855e86ae44-tmp\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9j2\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-kube-api-access-8t9j2\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-sa-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900263 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be7250ed-2e5a-403a-abfa-f1855e86ae44-datadir\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.901156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config-openshift-service-cacrt\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.901512 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-entrypoint\") pod \"collector-gthjs\" (UID: 
\"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.901609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.902172 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-trusted-ca\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.903636 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-syslog-receiver\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.915823 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be7250ed-2e5a-403a-abfa-f1855e86ae44-tmp\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.915977 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.916175 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-metrics\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.924419 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-sa-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.924811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t9j2\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-kube-api-access-8t9j2\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs" Jan 28 18:28:54 crc kubenswrapper[4985]: I0128 18:28:54.017339 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-gthjs" Jan 28 18:28:54 crc kubenswrapper[4985]: I0128 18:28:54.477031 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-gthjs"] Jan 28 18:28:54 crc kubenswrapper[4985]: I0128 18:28:54.610191 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-gthjs" event={"ID":"be7250ed-2e5a-403a-abfa-f1855e86ae44","Type":"ContainerStarted","Data":"00ae2f783614c06b7da308c2ab3a5a997cb9e8208f790c3fc0dbe87b680aba72"} Jan 28 18:28:55 crc kubenswrapper[4985]: I0128 18:28:55.281326 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3783738c-5aae-44e2-8406-47ac21968731" path="/var/lib/kubelet/pods/3783738c-5aae-44e2-8406-47ac21968731/volumes" Jan 28 18:29:04 crc kubenswrapper[4985]: I0128 18:29:04.693234 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-gthjs" event={"ID":"be7250ed-2e5a-403a-abfa-f1855e86ae44","Type":"ContainerStarted","Data":"5bacc122dfbc0f1572079c451f306713df7e0fed758858331828ed8721584186"} Jan 28 18:29:04 crc kubenswrapper[4985]: I0128 18:29:04.720631 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-gthjs" podStartSLOduration=2.114902166 podStartE2EDuration="11.720608325s" podCreationTimestamp="2026-01-28 18:28:53 +0000 UTC" firstStartedPulling="2026-01-28 18:28:54.48704342 +0000 UTC m=+945.313606281" lastFinishedPulling="2026-01-28 18:29:04.092749629 +0000 UTC m=+954.919312440" observedRunningTime="2026-01-28 18:29:04.716742606 +0000 UTC m=+955.543305427" watchObservedRunningTime="2026-01-28 18:29:04.720608325 +0000 UTC m=+955.547171156" Jan 28 18:29:11 crc kubenswrapper[4985]: I0128 18:29:11.185681 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:29:11 crc kubenswrapper[4985]: I0128 18:29:11.186230 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.346077 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"] Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.347936 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.349812 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.360243 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"] Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.404348 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.404407 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.404673 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.505902 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.506009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.506044 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.507070 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.507326 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.528663 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.674717 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:36 crc kubenswrapper[4985]: I0128 18:29:36.549649 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"] Jan 28 18:29:36 crc kubenswrapper[4985]: I0128 18:29:36.945149 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerStarted","Data":"c993290ec5ddedbf6904755238b7d5ebfa7183fc2581d162c0318393f22c9f3d"} Jan 28 18:29:39 crc kubenswrapper[4985]: I0128 18:29:39.970295 4985 generic.go:334] "Generic (PLEG): container finished" podID="096a6287-784c-410e-99c8-16188796d2ea" containerID="ef14c315d552a784bc32f0bc199fe21bbf5063004c3778e86d59511172269245" exitCode=0 Jan 28 18:29:39 crc kubenswrapper[4985]: I0128 18:29:39.970476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerDied","Data":"ef14c315d552a784bc32f0bc199fe21bbf5063004c3778e86d59511172269245"} Jan 28 18:29:39 crc kubenswrapper[4985]: I0128 18:29:39.973155 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.102999 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"] Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.104824 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.121870 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"] Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.220601 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.220861 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.220966 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.322737 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.322799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.322844 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.323505 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.323546 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.343236 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.421321 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.863865 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"] Jan 28 18:29:40 crc kubenswrapper[4985]: W0128 18:29:40.868964 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod444d0c9f_09e7_49e1_9f49_6653d2f9befa.slice/crio-f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba WatchSource:0}: Error finding container f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba: Status 404 returned error can't find the container with id f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.979371 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerStarted","Data":"f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba"} Jan 28 18:29:41 crc kubenswrapper[4985]: I0128 18:29:41.186469 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:29:41 crc kubenswrapper[4985]: I0128 18:29:41.186796 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:29:41 crc kubenswrapper[4985]: I0128 18:29:41.993083 4985 generic.go:334] "Generic (PLEG): container finished" podID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerID="bb7920b691aef048a369de5325cb19e6651ee07d08167e9693f136f8fd27957f" exitCode=0 Jan 28 18:29:41 crc kubenswrapper[4985]: I0128 18:29:41.993156 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerDied","Data":"bb7920b691aef048a369de5325cb19e6651ee07d08167e9693f136f8fd27957f"} Jan 28 18:29:49 crc kubenswrapper[4985]: I0128 18:29:49.048515 4985 generic.go:334] "Generic (PLEG): container finished" podID="096a6287-784c-410e-99c8-16188796d2ea" containerID="7229b8e58e9f7d6a84deea35c60f4407e557d28ea8eff0884b1dd6a2760ecd69" exitCode=0 Jan 28 18:29:49 crc kubenswrapper[4985]: I0128 18:29:49.049154 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerDied","Data":"7229b8e58e9f7d6a84deea35c60f4407e557d28ea8eff0884b1dd6a2760ecd69"} Jan 28 18:29:49 crc kubenswrapper[4985]: I0128 18:29:49.055148 4985 generic.go:334] "Generic (PLEG): 
container finished" podID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerID="3b213516d9dcfab58c762cfeccdff8a6d947fb73a1b523f5d00aca85cbafab8e" exitCode=0 Jan 28 18:29:49 crc kubenswrapper[4985]: I0128 18:29:49.055505 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerDied","Data":"3b213516d9dcfab58c762cfeccdff8a6d947fb73a1b523f5d00aca85cbafab8e"} Jan 28 18:29:50 crc kubenswrapper[4985]: I0128 18:29:50.064974 4985 generic.go:334] "Generic (PLEG): container finished" podID="096a6287-784c-410e-99c8-16188796d2ea" containerID="c1666e69c07f5a48bd38aebe27db263382fb3f97bfc9d5c4f5eba14abbf0aecd" exitCode=0 Jan 28 18:29:50 crc kubenswrapper[4985]: I0128 18:29:50.065033 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerDied","Data":"c1666e69c07f5a48bd38aebe27db263382fb3f97bfc9d5c4f5eba14abbf0aecd"} Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.073647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerStarted","Data":"8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e"} Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.090203 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sg7xt" podStartSLOduration=3.062685139 podStartE2EDuration="11.090189284s" podCreationTimestamp="2026-01-28 18:29:40 +0000 UTC" firstStartedPulling="2026-01-28 18:29:42.1679832 +0000 UTC m=+992.994546021" lastFinishedPulling="2026-01-28 18:29:50.195487345 +0000 UTC m=+1001.022050166" observedRunningTime="2026-01-28 18:29:51.088034223 +0000 UTC m=+1001.914597044" watchObservedRunningTime="2026-01-28 18:29:51.090189284 +0000 UTC m=+1001.916752105" Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.378765 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"
Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.426153 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") pod \"096a6287-784c-410e-99c8-16188796d2ea\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") "
Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.426265 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") pod \"096a6287-784c-410e-99c8-16188796d2ea\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") "
Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.426326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") pod \"096a6287-784c-410e-99c8-16188796d2ea\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") "
Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.427926 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle" (OuterVolumeSpecName: "bundle") pod "096a6287-784c-410e-99c8-16188796d2ea" (UID: "096a6287-784c-410e-99c8-16188796d2ea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.432839 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn" (OuterVolumeSpecName: "kube-api-access-s7stn") pod "096a6287-784c-410e-99c8-16188796d2ea" (UID: "096a6287-784c-410e-99c8-16188796d2ea"). InnerVolumeSpecName "kube-api-access-s7stn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.442334 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util" (OuterVolumeSpecName: "util") pod "096a6287-784c-410e-99c8-16188796d2ea" (UID: "096a6287-784c-410e-99c8-16188796d2ea"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.528433 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") on node \"crc\" DevicePath \"\""
Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.528473 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") on node \"crc\" DevicePath \"\""
Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.528488 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:29:52 crc kubenswrapper[4985]: I0128 18:29:52.083989 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerDied","Data":"c993290ec5ddedbf6904755238b7d5ebfa7183fc2581d162c0318393f22c9f3d"}
Jan 28 18:29:52 crc kubenswrapper[4985]: I0128 18:29:52.084028 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"
Jan 28 18:29:52 crc kubenswrapper[4985]: I0128 18:29:52.084051 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c993290ec5ddedbf6904755238b7d5ebfa7183fc2581d162c0318393f22c9f3d"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.807318 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ztr6n"]
Jan 28 18:29:54 crc kubenswrapper[4985]: E0128 18:29:54.807921 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="util"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.807934 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="util"
Jan 28 18:29:54 crc kubenswrapper[4985]: E0128 18:29:54.807946 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="extract"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.807953 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="extract"
Jan 28 18:29:54 crc kubenswrapper[4985]: E0128 18:29:54.807965 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="pull"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.807974 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="pull"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.808139 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="extract"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.808709 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.811236 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.811234 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ql7gj"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.811321 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.828718 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ztr6n"]
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.881316 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn4ff\" (UniqueName: \"kubernetes.io/projected/e130755a-0d4d-4efd-a08a-a3bda72ff4cf-kube-api-access-fn4ff\") pod \"nmstate-operator-646758c888-ztr6n\" (UID: \"e130755a-0d4d-4efd-a08a-a3bda72ff4cf\") " pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n"
Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.983496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn4ff\" (UniqueName: \"kubernetes.io/projected/e130755a-0d4d-4efd-a08a-a3bda72ff4cf-kube-api-access-fn4ff\") pod \"nmstate-operator-646758c888-ztr6n\" (UID: \"e130755a-0d4d-4efd-a08a-a3bda72ff4cf\") " pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n"
Jan 28 18:29:55 crc kubenswrapper[4985]: I0128 18:29:55.010867 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn4ff\" (UniqueName: \"kubernetes.io/projected/e130755a-0d4d-4efd-a08a-a3bda72ff4cf-kube-api-access-fn4ff\") pod \"nmstate-operator-646758c888-ztr6n\" (UID: \"e130755a-0d4d-4efd-a08a-a3bda72ff4cf\") " pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n"
Jan 28 18:29:55 crc kubenswrapper[4985]: I0128 18:29:55.126756 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n"
Jan 28 18:29:55 crc kubenswrapper[4985]: I0128 18:29:55.358236 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ztr6n"]
Jan 28 18:29:56 crc kubenswrapper[4985]: I0128 18:29:56.115115 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" event={"ID":"e130755a-0d4d-4efd-a08a-a3bda72ff4cf","Type":"ContainerStarted","Data":"0b08347245eeb190ecdac216e6201c9e8dfda0ca2b3c9c7a046d047f32958d75"}
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.569854 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7sz6k"]
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.575555 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.590391 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"]
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.643908 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.644101 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.644145 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.745824 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.745911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.745999 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.746386 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.746606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.764161 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.900587 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sz6k"
Jan 28 18:29:59 crc kubenswrapper[4985]: I0128 18:29:59.413139 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"]
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.147587 4985 generic.go:334] "Generic (PLEG): container finished" podID="07c652ff-94af-4252-802d-06c695e40bfb" containerID="f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8" exitCode=0
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.147643 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerDied","Data":"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8"}
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.147866 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerStarted","Data":"cd8f4c0b360f8a01b98642a24d5480d1d28c8d20e2ef03104e449bd3d4e18f02"}
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.156912 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" event={"ID":"e130755a-0d4d-4efd-a08a-a3bda72ff4cf","Type":"ContainerStarted","Data":"e3fd21fb465a6ac7055f72a90b6622ed66f483ee3e1aacc8f27bac8a9f8abea1"}
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.161822 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"]
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.163211 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.165144 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.165894 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.217180 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"]
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.238321 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" podStartSLOduration=2.322589297 podStartE2EDuration="6.238307177s" podCreationTimestamp="2026-01-28 18:29:54 +0000 UTC" firstStartedPulling="2026-01-28 18:29:55.363322815 +0000 UTC m=+1006.189885636" lastFinishedPulling="2026-01-28 18:29:59.279040705 +0000 UTC m=+1010.105603516" observedRunningTime="2026-01-28 18:30:00.23697769 +0000 UTC m=+1011.063540511" watchObservedRunningTime="2026-01-28 18:30:00.238307177 +0000 UTC m=+1011.064869998"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.288647 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.288747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.288780 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.390355 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.390445 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.390512 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.393320 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.400822 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.428058 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sg7xt"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.428114 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sg7xt"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.434703 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.469324 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sg7xt"
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.479670 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:00 crc kubenswrapper[4985]: W0128 18:30:00.935185 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfca2781_d8d0_4e7e_85c8_d337780059ae.slice/crio-7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b WatchSource:0}: Error finding container 7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b: Status 404 returned error can't find the container with id 7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b
Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.944342 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"]
Jan 28 18:30:01 crc kubenswrapper[4985]: I0128 18:30:01.163997 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" event={"ID":"dfca2781-d8d0-4e7e-85c8-d337780059ae","Type":"ContainerStarted","Data":"7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b"}
Jan 28 18:30:01 crc kubenswrapper[4985]: I0128 18:30:01.212298 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sg7xt"
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.174311 4985 generic.go:334] "Generic (PLEG): container finished" podID="dfca2781-d8d0-4e7e-85c8-d337780059ae" containerID="0f1e952a6fa49b7083594207d25422769b2776c1aec196aa97dc536dd6123d3e" exitCode=0
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.174355 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" event={"ID":"dfca2781-d8d0-4e7e-85c8-d337780059ae","Type":"ContainerDied","Data":"0f1e952a6fa49b7083594207d25422769b2776c1aec196aa97dc536dd6123d3e"}
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.939451 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"]
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.940714 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.943001 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hjdn7"
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.943204 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.961684 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-gkjzc"]
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.962943 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.968738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kl8j\" (UniqueName: \"kubernetes.io/projected/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-kube-api-access-4kl8j\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.968869 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.974976 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"]
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.976733 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vznlg"]
Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.977928 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.002104 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vznlg"]
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071337 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x7rg\" (UniqueName: \"kubernetes.io/projected/05eeb2e4-510c-4b66-addf-efaddce8cfb0-kube-api-access-2x7rg\") pod \"nmstate-metrics-54757c584b-vznlg\" (UID: \"05eeb2e4-510c-4b66-addf-efaddce8cfb0\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071423 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bj2k\" (UniqueName: \"kubernetes.io/projected/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-kube-api-access-4bj2k\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071482 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-nmstate-lock\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071514 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-dbus-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071579 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kl8j\" (UniqueName: \"kubernetes.io/projected/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-kube-api-access-4kl8j\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-ovs-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: E0128 18:30:03.071878 4985 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 28 18:30:03 crc kubenswrapper[4985]: E0128 18:30:03.072025 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair podName:645ec0ef-97a6-4e2f-b691-ffcbcab4eed7 nodeName:}" failed. No retries permitted until 2026-01-28 18:30:03.571999689 +0000 UTC m=+1014.398562510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-jrf9w" (UID: "645ec0ef-97a6-4e2f-b691-ffcbcab4eed7") : secret "openshift-nmstate-webhook" not found
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.102888 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"]
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.109666 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kl8j\" (UniqueName: \"kubernetes.io/projected/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-kube-api-access-4kl8j\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.119934 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.125884 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.126183 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nsd86"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.126229 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.127238 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"]
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172735 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6b5v\" (UniqueName: \"kubernetes.io/projected/b866e710-8894-47da-9251-4118fec613bd-kube-api-access-d6b5v\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172834 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-ovs-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172868 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b866e710-8894-47da-9251-4118fec613bd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172905 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x7rg\" (UniqueName: \"kubernetes.io/projected/05eeb2e4-510c-4b66-addf-efaddce8cfb0-kube-api-access-2x7rg\") pod \"nmstate-metrics-54757c584b-vznlg\" (UID: \"05eeb2e4-510c-4b66-addf-efaddce8cfb0\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172941 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172987 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bj2k\" (UniqueName: \"kubernetes.io/projected/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-kube-api-access-4bj2k\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173023 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-nmstate-lock\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173047 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-dbus-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173410 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-dbus-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-ovs-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-nmstate-lock\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.193011 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerStarted","Data":"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a"}
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.195917 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x7rg\" (UniqueName: \"kubernetes.io/projected/05eeb2e4-510c-4b66-addf-efaddce8cfb0-kube-api-access-2x7rg\") pod \"nmstate-metrics-54757c584b-vznlg\" (UID: \"05eeb2e4-510c-4b66-addf-efaddce8cfb0\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.203178 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bj2k\" (UniqueName: \"kubernetes.io/projected/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-kube-api-access-4bj2k\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.275986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.276074 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6b5v\" (UniqueName: \"kubernetes.io/projected/b866e710-8894-47da-9251-4118fec613bd-kube-api-access-d6b5v\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.276141 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b866e710-8894-47da-9251-4118fec613bd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.276974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b866e710-8894-47da-9251-4118fec613bd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: E0128 18:30:03.277054 4985 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Jan 28 18:30:03 crc kubenswrapper[4985]: E0128 18:30:03.277093 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert podName:b866e710-8894-47da-9251-4118fec613bd nodeName:}" failed. No retries permitted until 2026-01-28 18:30:03.777079929 +0000 UTC m=+1014.603642750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-slwkn" (UID: "b866e710-8894-47da-9251-4118fec613bd") : secret "plugin-serving-cert" not found
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.306381 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6b5v\" (UniqueName: \"kubernetes.io/projected/b866e710-8894-47da-9251-4118fec613bd-kube-api-access-d6b5v\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.329956 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64878fb8f-ljltp"]
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.330938 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.360713 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.362586 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.370999 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"]
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386227 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386355 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386448 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386498 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386530 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386636 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386656 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.497395 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.497764 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.497870 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.497937 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.497961 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.498600 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.498666 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.498692 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.499545 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.499642 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.501081 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.506233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.508789 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.521449 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.585858 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.599831 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.619895 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.626934 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.662416 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64878fb8f-ljltp"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.728650 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") pod \"dfca2781-d8d0-4e7e-85c8-d337780059ae\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") "
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.728841 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") pod \"dfca2781-d8d0-4e7e-85c8-d337780059ae\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") "
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.728985 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") pod \"dfca2781-d8d0-4e7e-85c8-d337780059ae\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") "
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.731965 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dfca2781-d8d0-4e7e-85c8-d337780059ae" (UID: "dfca2781-d8d0-4e7e-85c8-d337780059ae"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.732669 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2" (OuterVolumeSpecName: "kube-api-access-2p4d2") pod "dfca2781-d8d0-4e7e-85c8-d337780059ae" (UID: "dfca2781-d8d0-4e7e-85c8-d337780059ae"). InnerVolumeSpecName "kube-api-access-2p4d2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.739244 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume" (OuterVolumeSpecName: "config-volume") pod "dfca2781-d8d0-4e7e-85c8-d337780059ae" (UID: "dfca2781-d8d0-4e7e-85c8-d337780059ae"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.768550 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"]
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.768870 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sg7xt" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="registry-server" containerID="cri-o://8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e" gracePeriod=2
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.830461 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.830583 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.830601 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") on node \"crc\" DevicePath \"\""
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.830612 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.835211 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.971059 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vznlg"]
Jan 28 18:30:03 crc kubenswrapper[4985]: W0128 18:30:03.987297 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05eeb2e4_510c_4b66_addf_efaddce8cfb0.slice/crio-e6711aa662b53f9a3c008ffb37df7827502e2fc6bed414fa5ba198cfb203da84 WatchSource:0}: Error finding container e6711aa662b53f9a3c008ffb37df7827502e2fc6bed414fa5ba198cfb203da84: Status 404 returned error can't find the container with id e6711aa662b53f9a3c008ffb37df7827502e2fc6bed414fa5ba198cfb203da84
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.058027 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.170007 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"]
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.181619 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"]
Jan 28 18:30:04 crc kubenswrapper[4985]: W0128 18:30:04.195042 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d2b3a75_cb2e_41a2_9005_a72a8aebb818.slice/crio-5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e WatchSource:0}: Error finding container 5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e: Status 404 returned error can't find the container with id 5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e
Jan 28 18:30:04 crc kubenswrapper[4985]: W0128 18:30:04.195544 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod645ec0ef_97a6_4e2f_b691_ffcbcab4eed7.slice/crio-530b29cee5f7f5a8a342bb33ce184ad39ef8654ff8359f430cccd5a4e812116f WatchSource:0}: Error finding container 530b29cee5f7f5a8a342bb33ce184ad39ef8654ff8359f430cccd5a4e812116f: Status 404 returned error can't find the container with id 530b29cee5f7f5a8a342bb33ce184ad39ef8654ff8359f430cccd5a4e812116f
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.218940 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.220950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" event={"ID":"dfca2781-d8d0-4e7e-85c8-d337780059ae","Type":"ContainerDied","Data":"7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b"}
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.221010 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b"
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.230328 4985 generic.go:334] "Generic (PLEG): container finished" podID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerID="8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e" exitCode=0
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.230404 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerDied","Data":"8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e"}
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.231623 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gkjzc" event={"ID":"8f0319d2-9602-42b4-a3fb-c53bf5d3c244","Type":"ContainerStarted","Data":"55a9a2e0be146cd8425f05f9bf9091b12c0dcc737731c765ee5c74965d814b6b"}
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.232735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" event={"ID":"05eeb2e4-510c-4b66-addf-efaddce8cfb0","Type":"ContainerStarted","Data":"e6711aa662b53f9a3c008ffb37df7827502e2fc6bed414fa5ba198cfb203da84"}
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.235724 4985 generic.go:334] "Generic (PLEG): container finished" podID="07c652ff-94af-4252-802d-06c695e40bfb" containerID="5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a" exitCode=0
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.235770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerDied","Data":"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a"}
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.273808 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg7xt"
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.349129 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") pod \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") "
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.349544 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") pod \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") "
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.349713 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") pod \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") "
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.352012 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities" (OuterVolumeSpecName: "utilities") pod "444d0c9f-09e7-49e1-9f49-6653d2f9befa" (UID: "444d0c9f-09e7-49e1-9f49-6653d2f9befa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.357958 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng" (OuterVolumeSpecName: "kube-api-access-pstng") pod "444d0c9f-09e7-49e1-9f49-6653d2f9befa" (UID: "444d0c9f-09e7-49e1-9f49-6653d2f9befa"). InnerVolumeSpecName "kube-api-access-pstng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.376392 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "444d0c9f-09e7-49e1-9f49-6653d2f9befa" (UID: "444d0c9f-09e7-49e1-9f49-6653d2f9befa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.452939 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.453018 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.453086 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") on node \"crc\" DevicePath \"\""
Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.566991 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"]
Jan 28 18:30:04 crc kubenswrapper[4985]: W0128 18:30:04.570914 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb866e710_8894_47da_9251_4118fec613bd.slice/crio-08ac7aec2af4f6f7dcd626d2d1da9fe5dce4d37eb1ad61ba3d4fb0bbe11f2a0d WatchSource:0}: Error finding container 08ac7aec2af4f6f7dcd626d2d1da9fe5dce4d37eb1ad61ba3d4fb0bbe11f2a0d: Status 404 returned error can't find the container with id 08ac7aec2af4f6f7dcd626d2d1da9fe5dce4d37eb1ad61ba3d4fb0bbe11f2a0d
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.242938 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" event={"ID":"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7","Type":"ContainerStarted","Data":"530b29cee5f7f5a8a342bb33ce184ad39ef8654ff8359f430cccd5a4e812116f"}
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.249533 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerDied","Data":"f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba"}
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.249553 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg7xt"
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.249603 4985 scope.go:117] "RemoveContainer" containerID="8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e"
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.250986 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64878fb8f-ljltp" event={"ID":"0d2b3a75-cb2e-41a2-9005-a72a8aebb818","Type":"ContainerStarted","Data":"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8"}
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.251044 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64878fb8f-ljltp" event={"ID":"0d2b3a75-cb2e-41a2-9005-a72a8aebb818","Type":"ContainerStarted","Data":"5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e"}
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.253276 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" event={"ID":"b866e710-8894-47da-9251-4118fec613bd","Type":"ContainerStarted","Data":"08ac7aec2af4f6f7dcd626d2d1da9fe5dce4d37eb1ad61ba3d4fb0bbe11f2a0d"}
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.257463 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerStarted","Data":"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315"}
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.279975 4985 scope.go:117] "RemoveContainer" containerID="3b213516d9dcfab58c762cfeccdff8a6d947fb73a1b523f5d00aca85cbafab8e"
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.285806 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64878fb8f-ljltp" podStartSLOduration=2.285781718 podStartE2EDuration="2.285781718s" podCreationTimestamp="2026-01-28 18:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:30:05.269779407 +0000 UTC m=+1016.096342248" watchObservedRunningTime="2026-01-28 18:30:05.285781718 +0000 UTC m=+1016.112344559"
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.296072 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"]
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.309280 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"]
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.311914 4985 scope.go:117] "RemoveContainer" containerID="bb7920b691aef048a369de5325cb19e6651ee07d08167e9693f136f8fd27957f"
Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.312093 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7sz6k" podStartSLOduration=2.639251667 podStartE2EDuration="7.312071771s" podCreationTimestamp="2026-01-28 18:29:58 +0000 UTC" firstStartedPulling="2026-01-28 18:30:00.150134198 +0000 UTC m=+1010.976697039" lastFinishedPulling="2026-01-28 18:30:04.822954322 +0000 UTC m=+1015.649517143" observedRunningTime="2026-01-28 18:30:05.308993574 +0000 UTC m=+1016.135556415" watchObservedRunningTime="2026-01-28 18:30:05.312071771 +0000 UTC m=+1016.138634592"
Jan 28 18:30:07 crc kubenswrapper[4985]: I0128 18:30:07.272950 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" path="/var/lib/kubelet/pods/444d0c9f-09e7-49e1-9f49-6653d2f9befa/volumes"
Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.286041 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" event={"ID":"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7","Type":"ContainerStarted","Data":"6b381f3165c4388b77a018937ba97684d69b5b201d009ab83290fe218f296818"}
Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.287186 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.287887 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gkjzc" event={"ID":"8f0319d2-9602-42b4-a3fb-c53bf5d3c244","Type":"ContainerStarted","Data":"14d02fbaf84ba0b3756257de3e54645c51e770acf80b650947908cdd2ff23bd5"}
Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.288823 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gkjzc"
Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.291016 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" event={"ID":"05eeb2e4-510c-4b66-addf-efaddce8cfb0","Type":"ContainerStarted","Data":"cd9da237246485b2ca7075506e0dcb6c08ef6571d863749756757d4a23d9c606"}
Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.292759 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" event={"ID":"b866e710-8894-47da-9251-4118fec613bd","Type":"ContainerStarted","Data":"8f61ae2e19dd8ff4b74cf00847abb484ed986b7e49d0927e9f5ec4ff74395124"}
Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.312837 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" podStartSLOduration=3.041194574 podStartE2EDuration="6.31281354s" podCreationTimestamp="2026-01-28 18:30:02 +0000 UTC" firstStartedPulling="2026-01-28 18:30:04.220049211 +0000 UTC m=+1015.046612032" lastFinishedPulling="2026-01-28 18:30:07.491668177 +0000 UTC m=+1018.318230998" observedRunningTime="2026-01-28 18:30:08.304475065 +0000 UTC m=+1019.131037896" watchObservedRunningTime="2026-01-28 18:30:08.31281354 +0000 UTC m=+1019.139376361"
Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.334349 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-gkjzc" podStartSLOduration=2.30364553 podStartE2EDuration="6.334323827s" podCreationTimestamp="2026-01-28 18:30:02 +0000 UTC" firstStartedPulling="2026-01-28 18:30:03.426644651 +0000 UTC m=+1014.253207472" lastFinishedPulling="2026-01-28 18:30:07.457322948 +0000 UTC m=+1018.283885769" observedRunningTime="2026-01-28 18:30:08.329393538 +0000 UTC m=+1019.155956359" watchObservedRunningTime="2026-01-28 18:30:08.334323827 +0000 UTC m=+1019.160886648"
Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.351103 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" podStartSLOduration=2.468855027 podStartE2EDuration="5.35108413s" podCreationTimestamp="2026-01-28 18:30:03 +0000 UTC" firstStartedPulling="2026-01-28 18:30:04.572980865 +0000 UTC m=+1015.399543686" lastFinishedPulling="2026-01-28 18:30:07.455209968 +0000 UTC
m=+1018.281772789" observedRunningTime="2026-01-28 18:30:08.341719835 +0000 UTC m=+1019.168282656" watchObservedRunningTime="2026-01-28 18:30:08.35108413 +0000 UTC m=+1019.177646951" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.901106 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.901210 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.965586 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:09 crc kubenswrapper[4985]: I0128 18:30:09.347952 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:10 crc kubenswrapper[4985]: I0128 18:30:10.160966 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"] Jan 28 18:30:10 crc kubenswrapper[4985]: I0128 18:30:10.312150 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" event={"ID":"05eeb2e4-510c-4b66-addf-efaddce8cfb0","Type":"ContainerStarted","Data":"f552673294749f53337e4e8377ebec4b9bfdb34cb827a4f3dc0232acf5bfa0d0"} Jan 28 18:30:10 crc kubenswrapper[4985]: I0128 18:30:10.334883 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" podStartSLOduration=2.259466174 podStartE2EDuration="8.334865077s" podCreationTimestamp="2026-01-28 18:30:02 +0000 UTC" firstStartedPulling="2026-01-28 18:30:03.990516161 +0000 UTC m=+1014.817078982" lastFinishedPulling="2026-01-28 18:30:10.065915064 +0000 UTC m=+1020.892477885" observedRunningTime="2026-01-28 18:30:10.328897338 +0000 UTC m=+1021.155460159" watchObservedRunningTime="2026-01-28 18:30:10.334865077 +0000 UTC m=+1021.161427898" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.185847 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.185923 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.185982 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.186952 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.187040 4985 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab" gracePeriod=600 Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.323990 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab" exitCode=0 Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.324125 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab"} Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.324470 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7sz6k" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="registry-server" containerID="cri-o://1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" gracePeriod=2 Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.324544 4985 scope.go:117] "RemoveContainer" containerID="adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.759621 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.891459 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") pod \"07c652ff-94af-4252-802d-06c695e40bfb\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.891589 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") pod \"07c652ff-94af-4252-802d-06c695e40bfb\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.891686 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") pod \"07c652ff-94af-4252-802d-06c695e40bfb\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.892589 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities" (OuterVolumeSpecName: "utilities") pod "07c652ff-94af-4252-802d-06c695e40bfb" (UID: "07c652ff-94af-4252-802d-06c695e40bfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.896515 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv" (OuterVolumeSpecName: "kube-api-access-zq5mv") pod "07c652ff-94af-4252-802d-06c695e40bfb" (UID: "07c652ff-94af-4252-802d-06c695e40bfb"). 
InnerVolumeSpecName "kube-api-access-zq5mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.948102 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07c652ff-94af-4252-802d-06c695e40bfb" (UID: "07c652ff-94af-4252-802d-06c695e40bfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.993658 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.993703 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.993720 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337722 4985 generic.go:334] "Generic (PLEG): container finished" podID="07c652ff-94af-4252-802d-06c695e40bfb" containerID="1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" exitCode=0 Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337813 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337813 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerDied","Data":"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315"} Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337902 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerDied","Data":"cd8f4c0b360f8a01b98642a24d5480d1d28c8d20e2ef03104e449bd3d4e18f02"} Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337927 4985 scope.go:117] "RemoveContainer" containerID="1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.341090 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093"} Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.359078 4985 scope.go:117] "RemoveContainer" containerID="5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.393506 4985 scope.go:117] "RemoveContainer" containerID="f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.394094 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"] Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 
18:30:12.399878 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"] Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.413532 4985 scope.go:117] "RemoveContainer" containerID="1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" Jan 28 18:30:12 crc kubenswrapper[4985]: E0128 18:30:12.414191 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315\": container with ID starting with 1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315 not found: ID does not exist" containerID="1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.414231 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315"} err="failed to get container status \"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315\": rpc error: code = NotFound desc = could not find container \"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315\": container with ID starting with 1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315 not found: ID does not exist" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.414284 4985 scope.go:117] "RemoveContainer" containerID="5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a" Jan 28 18:30:12 crc kubenswrapper[4985]: E0128 18:30:12.414756 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a\": container with ID starting with 5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a not found: ID does not exist" containerID="5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.414789 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a"} err="failed to get container status \"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a\": rpc error: code = NotFound desc = could not find container \"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a\": container with ID starting with 5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a not found: ID does not exist" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.414810 4985 scope.go:117] "RemoveContainer" containerID="f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8" Jan 28 18:30:12 crc kubenswrapper[4985]: E0128 18:30:12.415124 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8\": container with ID starting with f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8 not found: ID does not exist" containerID="f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.415152 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8"} err="failed to get container status 
\"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8\": rpc error: code = NotFound desc = could not find container \"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8\": container with ID starting with f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8 not found: ID does not exist" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.273997 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c652ff-94af-4252-802d-06c695e40bfb" path="/var/lib/kubelet/pods/07c652ff-94af-4252-802d-06c695e40bfb/volumes" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.403344 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.662914 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.662999 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.671235 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:14 crc kubenswrapper[4985]: I0128 18:30:14.360681 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:14 crc kubenswrapper[4985]: I0128 18:30:14.417083 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:30:23 crc kubenswrapper[4985]: I0128 18:30:23.637419 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:39 crc kubenswrapper[4985]: I0128 18:30:39.485556 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-cd8f6d96f-p5cf4" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" containerID="cri-o://12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" gracePeriod=15 Jan 28 18:30:39 crc kubenswrapper[4985]: I0128 18:30:39.661499 4985 patch_prober.go:28] interesting pod/console-cd8f6d96f-p5cf4 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.85:8443/health\": dial tcp 10.217.0.85:8443: connect: connection refused" start-of-body= Jan 28 18:30:39 crc kubenswrapper[4985]: I0128 18:30:39.661955 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-cd8f6d96f-p5cf4" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" probeResult="failure" output="Get \"https://10.217.0.85:8443/health\": dial tcp 10.217.0.85:8443: connect: connection refused" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.124720 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw"] Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125061 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125076 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: 
E0128 18:30:40.125114 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="extract-utilities" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125122 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="extract-utilities" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125140 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="extract-utilities" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125149 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="extract-utilities" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125178 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="extract-content" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125186 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="extract-content" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125207 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfca2781-d8d0-4e7e-85c8-d337780059ae" containerName="collect-profiles" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125215 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfca2781-d8d0-4e7e-85c8-d337780059ae" containerName="collect-profiles" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125228 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125236 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125272 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="extract-content" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125280 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="extract-content" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125483 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfca2781-d8d0-4e7e-85c8-d337780059ae" containerName="collect-profiles" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125500 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125512 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.129852 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.133682 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.134934 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw"] Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.294082 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.294140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.294223 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.395867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.395971 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.396017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.396481 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.396479 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.414017 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.446364 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.538140 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-cd8f6d96f-p5cf4_a056a5e7-3897-4712-960c-e0211c7b3062/console/0.log" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.538205 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588096 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-cd8f6d96f-p5cf4_a056a5e7-3897-4712-960c-e0211c7b3062/console/0.log" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588155 4985 generic.go:334] "Generic (PLEG): container finished" podID="a056a5e7-3897-4712-960c-e0211c7b3062" containerID="12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" exitCode=2 Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cd8f6d96f-p5cf4" event={"ID":"a056a5e7-3897-4712-960c-e0211c7b3062","Type":"ContainerDied","Data":"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55"} Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588222 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cd8f6d96f-p5cf4" event={"ID":"a056a5e7-3897-4712-960c-e0211c7b3062","Type":"ContainerDied","Data":"6757ef85c9af6b8087e2bbaecccf725d4d9f1d7a4e12622260f4ddbd98525b61"} Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588242 4985 scope.go:117] "RemoveContainer" containerID="12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588258 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.624609 4985 scope.go:117] "RemoveContainer" containerID="12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.639212 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55\": container with ID starting with 12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55 not found: ID does not exist" containerID="12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.639324 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55"} err="failed to get container status \"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55\": rpc error: code = NotFound desc = could not find container \"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55\": container with ID starting with 12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55 not found: ID does not exist" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706075 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706444 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706558 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706624 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706657 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706682 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706701 4985 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.709003 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.710529 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca" (OuterVolumeSpecName: "service-ca") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.710877 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config" (OuterVolumeSpecName: "console-config") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.711200 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.723665 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.727469 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.732392 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v" (OuterVolumeSpecName: "kube-api-access-vb29v") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "kube-api-access-vb29v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808570 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808601 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808636 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808796 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808812 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808823 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808830 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.917141 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.923743 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.953589 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw"] Jan 28 18:30:41 crc kubenswrapper[4985]: I0128 18:30:41.276469 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" path="/var/lib/kubelet/pods/a056a5e7-3897-4712-960c-e0211c7b3062/volumes" Jan 28 18:30:41 crc kubenswrapper[4985]: I0128 18:30:41.607547 4985 generic.go:334] "Generic (PLEG): container finished" podID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerID="f01564deafeadd6b998299c4c5ab42888fcd5f692a0e41851fa650ff19085772" exitCode=0 Jan 28 18:30:41 crc kubenswrapper[4985]: I0128 18:30:41.607631 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerDied","Data":"f01564deafeadd6b998299c4c5ab42888fcd5f692a0e41851fa650ff19085772"} Jan 28 18:30:41 crc kubenswrapper[4985]: I0128 18:30:41.607669 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerStarted","Data":"d862596b70179867a2d1d1607ff3f8f4ee055f5aac6c96bf0deaa7806ec19d70"} Jan 28 18:30:44 crc kubenswrapper[4985]: I0128 18:30:44.632638 4985 generic.go:334] "Generic (PLEG): container finished" podID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerID="2e43827cfcb704b295c3dc551b2d4faca86ff7e70beb4fc6babf08be4f0b6f9f" exitCode=0 Jan 28 18:30:44 crc kubenswrapper[4985]: I0128 18:30:44.632694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerDied","Data":"2e43827cfcb704b295c3dc551b2d4faca86ff7e70beb4fc6babf08be4f0b6f9f"} Jan 28 18:30:45 crc kubenswrapper[4985]: I0128 18:30:45.643018 4985 generic.go:334] "Generic (PLEG): container finished" podID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerID="84b172f9348b7b34fa12131848f32c49d5d898b4bb06d7fa4c0b794dd9d81624" exitCode=0 Jan 28 18:30:45 crc kubenswrapper[4985]: I0128 18:30:45.643095 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerDied","Data":"84b172f9348b7b34fa12131848f32c49d5d898b4bb06d7fa4c0b794dd9d81624"} Jan 28 18:30:46 crc kubenswrapper[4985]: I0128 18:30:46.920064 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.004732 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") pod \"9ec863bb-8b63-4362-9bc6-93c91eebec21\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.004899 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") pod \"9ec863bb-8b63-4362-9bc6-93c91eebec21\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.004957 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") pod \"9ec863bb-8b63-4362-9bc6-93c91eebec21\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.005825 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle" (OuterVolumeSpecName: "bundle") pod "9ec863bb-8b63-4362-9bc6-93c91eebec21" (UID: "9ec863bb-8b63-4362-9bc6-93c91eebec21"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.010480 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf" (OuterVolumeSpecName: "kube-api-access-tpkcf") pod "9ec863bb-8b63-4362-9bc6-93c91eebec21" (UID: "9ec863bb-8b63-4362-9bc6-93c91eebec21"). 
InnerVolumeSpecName "kube-api-access-tpkcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.107290 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.107326 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.660015 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerDied","Data":"d862596b70179867a2d1d1607ff3f8f4ee055f5aac6c96bf0deaa7806ec19d70"} Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.660058 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d862596b70179867a2d1d1607ff3f8f4ee055f5aac6c96bf0deaa7806ec19d70" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.660102 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.796153 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util" (OuterVolumeSpecName: "util") pod "9ec863bb-8b63-4362-9bc6-93c91eebec21" (UID: "9ec863bb-8b63-4362-9bc6-93c91eebec21"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.819238 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.915973 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5"] Jan 28 18:30:57 crc kubenswrapper[4985]: E0128 18:30:57.916923 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="extract" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.916938 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="extract" Jan 28 18:30:57 crc kubenswrapper[4985]: E0128 18:30:57.916988 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="pull" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.916997 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="pull" Jan 28 18:30:57 crc kubenswrapper[4985]: E0128 18:30:57.917014 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="util" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917021 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="util" Jan 28 18:30:57 crc kubenswrapper[4985]: E0128 18:30:57.917034 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917041 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917221 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="extract" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917263 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917918 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.923906 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.923955 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.924165 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.924369 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.924460 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-cgp4v" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.941914 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5"] Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.086956 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-apiservice-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.087234 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-webhook-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.087361 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbtm\" (UniqueName: \"kubernetes.io/projected/c77a825c-f720-48a7-b74f-49b16e3ecbed-kube-api-access-nvbtm\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.189115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvbtm\" (UniqueName: \"kubernetes.io/projected/c77a825c-f720-48a7-b74f-49b16e3ecbed-kube-api-access-nvbtm\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.189222 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-apiservice-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.189283 
4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-webhook-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.194978 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-apiservice-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.195022 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-webhook-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.213364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbtm\" (UniqueName: \"kubernetes.io/projected/c77a825c-f720-48a7-b74f-49b16e3ecbed-kube-api-access-nvbtm\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.238184 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.241709 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz"] Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.242767 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.245690 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-p7k28" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.246497 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.246654 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.258402 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz"] Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.395610 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqwz\" (UniqueName: \"kubernetes.io/projected/57ef54a5-9891-4f69-9907-b726d30d4006-kube-api-access-8bqwz\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.396020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-webhook-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.396105 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-apiservice-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.497821 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-apiservice-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.497905 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bqwz\" (UniqueName: \"kubernetes.io/projected/57ef54a5-9891-4f69-9907-b726d30d4006-kube-api-access-8bqwz\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.498027 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-webhook-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.507178 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-apiservice-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.519492 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-webhook-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.524563 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bqwz\" (UniqueName: \"kubernetes.io/projected/57ef54a5-9891-4f69-9907-b726d30d4006-kube-api-access-8bqwz\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.619349 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.714366 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5"] Jan 28 18:30:58 crc kubenswrapper[4985]: W0128 18:30:58.717915 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc77a825c_f720_48a7_b74f_49b16e3ecbed.slice/crio-837680f8e9df9b6ba4f1323b1f7c08a49bd0b5e7b486f31a278c00a04e1e8014 WatchSource:0}: Error finding container 837680f8e9df9b6ba4f1323b1f7c08a49bd0b5e7b486f31a278c00a04e1e8014: Status 404 returned error can't find the container with id 837680f8e9df9b6ba4f1323b1f7c08a49bd0b5e7b486f31a278c00a04e1e8014 Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.755432 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" event={"ID":"c77a825c-f720-48a7-b74f-49b16e3ecbed","Type":"ContainerStarted","Data":"837680f8e9df9b6ba4f1323b1f7c08a49bd0b5e7b486f31a278c00a04e1e8014"} Jan 28 18:30:59 crc kubenswrapper[4985]: I0128 18:30:59.098718 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz"] Jan 28 18:30:59 crc kubenswrapper[4985]: I0128 18:30:59.763435 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" event={"ID":"57ef54a5-9891-4f69-9907-b726d30d4006","Type":"ContainerStarted","Data":"92e3645c86e6c8b47b14b5900b2700375dc4f20d875058684762005ebe04f0a1"} Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.811980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" event={"ID":"57ef54a5-9891-4f69-9907-b726d30d4006","Type":"ContainerStarted","Data":"fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae"} Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.812599 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.818592 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" event={"ID":"c77a825c-f720-48a7-b74f-49b16e3ecbed","Type":"ContainerStarted","Data":"c7994e4e9289d830d3d2b83f6fe38b4798e6db43a7a5f82ef83d020e4a399d26"} Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.818840 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.846289 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podStartSLOduration=1.9125520580000002 podStartE2EDuration="6.846271338s" podCreationTimestamp="2026-01-28 18:30:58 +0000 UTC" firstStartedPulling="2026-01-28 18:30:59.113723025 +0000 UTC m=+1069.940285846" lastFinishedPulling="2026-01-28 18:31:04.047442305 +0000 UTC m=+1074.874005126" observedRunningTime="2026-01-28 18:31:04.840814584 +0000 UTC m=+1075.667377405" watchObservedRunningTime="2026-01-28 18:31:04.846271338 +0000 UTC m=+1075.672834159" Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.865915 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podStartSLOduration=4.691004668 podStartE2EDuration="7.865897333s" podCreationTimestamp="2026-01-28 18:30:57 +0000 UTC" firstStartedPulling="2026-01-28 18:30:58.724735793 +0000 UTC m=+1069.551298614" lastFinishedPulling="2026-01-28 18:31:01.899628458 +0000 UTC m=+1072.726191279" observedRunningTime="2026-01-28 18:31:04.865725238 +0000 UTC m=+1075.692288059" watchObservedRunningTime="2026-01-28 18:31:04.865897333 +0000 UTC m=+1075.692460154" Jan 28 18:31:18 crc kubenswrapper[4985]: I0128 18:31:18.626588 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:31:38 crc kubenswrapper[4985]: I0128 18:31:38.243522 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.015217 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qlsnv"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.019004 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.020598 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nmf2x" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.021051 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.021262 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.048109 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.048992 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.052761 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.074230 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.111750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-startup\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.111855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcqq\" (UniqueName: \"kubernetes.io/projected/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-kube-api-access-4fcqq\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.111903 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-conf\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.111948 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics-certs\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.112061 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.112107 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-sockets\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.112141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-reloader\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.144981 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6lq6d"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.147594 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.152423 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.156114 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.156296 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.156919 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-96452" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.169107 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8f79k"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.170228 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.174645 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.202176 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8f79k"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics-certs\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217150 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ebe169-8b20-4d94-99b7-96afffcb5118-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217169 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-sockets\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217190 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-reloader\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217441 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpstv\" (UniqueName: 
\"kubernetes.io/projected/f6ebe169-8b20-4d94-99b7-96afffcb5118-kube-api-access-tpstv\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217500 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-startup\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217571 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fcqq\" (UniqueName: \"kubernetes.io/projected/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-kube-api-access-4fcqq\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217610 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-conf\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217623 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-sockets\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217810 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-reloader\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.218066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-conf\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.218288 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-startup\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.241991 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics-certs\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.277668 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4fcqq\" (UniqueName: \"kubernetes.io/projected/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-kube-api-access-4fcqq\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpstv\" (UniqueName: \"kubernetes.io/projected/f6ebe169-8b20-4d94-99b7-96afffcb5118-kube-api-access-tpstv\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322347 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322378 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-cert\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322403 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nltf\" (UniqueName: \"kubernetes.io/projected/5fd77adb-e801-4d3f-ac61-64615952aebd-kube-api-access-7nltf\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322442 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b5094b56-07e5-45db-8a13-ce7b931b861e-metallb-excludel2\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ebe169-8b20-4d94-99b7-96afffcb5118-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322502 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322546 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24vv\" (UniqueName: \"kubernetes.io/projected/b5094b56-07e5-45db-8a13-ce7b931b861e-kube-api-access-q24vv\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322608 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.336800 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.337037 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ebe169-8b20-4d94-99b7-96afffcb5118-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.357979 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpstv\" (UniqueName: \"kubernetes.io/projected/f6ebe169-8b20-4d94-99b7-96afffcb5118-kube-api-access-tpstv\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.366608 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.423778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.423924 4985 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.423949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.423991 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs podName:5fd77adb-e801-4d3f-ac61-64615952aebd nodeName:}" failed. No retries permitted until 2026-01-28 18:31:39.923966354 +0000 UTC m=+1110.750529175 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs") pod "controller-6968d8fdc4-8f79k" (UID: "5fd77adb-e801-4d3f-ac61-64615952aebd") : secret "controller-certs-secret" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424013 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-cert\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.424023 4985 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424042 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nltf\" (UniqueName: \"kubernetes.io/projected/5fd77adb-e801-4d3f-ac61-64615952aebd-kube-api-access-7nltf\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.424054 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist podName:b5094b56-07e5-45db-8a13-ce7b931b861e nodeName:}" failed. No retries permitted until 2026-01-28 18:31:39.924043637 +0000 UTC m=+1110.750606458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist") pod "speaker-6lq6d" (UID: "b5094b56-07e5-45db-8a13-ce7b931b861e") : secret "metallb-memberlist" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b5094b56-07e5-45db-8a13-ce7b931b861e-metallb-excludel2\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424119 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424166 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q24vv\" (UniqueName: \"kubernetes.io/projected/b5094b56-07e5-45db-8a13-ce7b931b861e-kube-api-access-q24vv\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.425235 4985 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.425324 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs podName:b5094b56-07e5-45db-8a13-ce7b931b861e nodeName:}" failed. No retries permitted until 2026-01-28 18:31:39.925304692 +0000 UTC m=+1110.751867593 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs") pod "speaker-6lq6d" (UID: "b5094b56-07e5-45db-8a13-ce7b931b861e") : secret "speaker-certs-secret" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.425713 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b5094b56-07e5-45db-8a13-ce7b931b861e-metallb-excludel2\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.428083 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.444066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-cert\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.459723 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q24vv\" (UniqueName: \"kubernetes.io/projected/b5094b56-07e5-45db-8a13-ce7b931b861e-kube-api-access-q24vv\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.459934 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nltf\" (UniqueName: \"kubernetes.io/projected/5fd77adb-e801-4d3f-ac61-64615952aebd-kube-api-access-7nltf\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.898081 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.933080 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.933226 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.933425 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.934275 4985 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.934342 4985 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist podName:b5094b56-07e5-45db-8a13-ce7b931b861e nodeName:}" failed. No retries permitted until 2026-01-28 18:31:40.934323783 +0000 UTC m=+1111.760886604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist") pod "speaker-6lq6d" (UID: "b5094b56-07e5-45db-8a13-ce7b931b861e") : secret "metallb-memberlist" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.939692 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.940346 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.084094 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.128357 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"51af1179afefa1598a904c0a9643050740148bf78a9275f20c8b2a7c055d4143"} Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.129201 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" event={"ID":"f6ebe169-8b20-4d94-99b7-96afffcb5118","Type":"ContainerStarted","Data":"f3a7bcc0197afba71a468de099c230d22868b0f1a3690964e343bed3697cbe7d"} Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.512029 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8f79k"] Jan 28 18:31:40 crc kubenswrapper[4985]: W0128 18:31:40.513632 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd77adb_e801_4d3f_ac61_64615952aebd.slice/crio-153b4702ddecb2c3c1ad63a137fc9376f7b6fd7aa8b70d51ea947711bcd2e1b0 WatchSource:0}: Error finding container 153b4702ddecb2c3c1ad63a137fc9376f7b6fd7aa8b70d51ea947711bcd2e1b0: Status 404 returned error can't find the container with id 153b4702ddecb2c3c1ad63a137fc9376f7b6fd7aa8b70d51ea947711bcd2e1b0 Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.950928 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.960741 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.962027 4985 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6lq6d" Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.138376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerStarted","Data":"7aae29377de0d10e0129a0002e20c108028714bab9d7458c2227f36aa71a23c1"} Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.141024 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerStarted","Data":"1dde45509cf56844f3ab6d5fbf53d0755eaead1bd66d1b74829a2f7bc7ba0d5a"} Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.141073 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerStarted","Data":"32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8"} Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.141087 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerStarted","Data":"153b4702ddecb2c3c1ad63a137fc9376f7b6fd7aa8b70d51ea947711bcd2e1b0"} Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.141201 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.167130 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8f79k" podStartSLOduration=2.167102738 podStartE2EDuration="2.167102738s" podCreationTimestamp="2026-01-28 18:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:31:41.155770008 +0000 UTC m=+1111.982332839" watchObservedRunningTime="2026-01-28 18:31:41.167102738 +0000 UTC m=+1111.993665559" Jan 28 18:31:42 crc kubenswrapper[4985]: I0128 18:31:42.158165 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerStarted","Data":"aec67e329e28eb0bf89791a99394df8f02835ef73cc898402236bd17e3427a2f"} Jan 28 18:31:42 crc kubenswrapper[4985]: I0128 18:31:42.158512 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerStarted","Data":"7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089"} Jan 28 18:31:42 crc kubenswrapper[4985]: I0128 18:31:42.186572 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6lq6d" podStartSLOduration=3.186540138 podStartE2EDuration="3.186540138s" podCreationTimestamp="2026-01-28 18:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:31:42.182539615 +0000 UTC m=+1113.009102446" watchObservedRunningTime="2026-01-28 18:31:42.186540138 +0000 UTC m=+1113.013102969" Jan 28 18:31:43 crc kubenswrapper[4985]: I0128 18:31:43.165980 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6lq6d" Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.209780 4985 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" event={"ID":"f6ebe169-8b20-4d94-99b7-96afffcb5118","Type":"ContainerStarted","Data":"35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5"} Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.210537 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.212288 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="a3f390e836420052d8007a8696e14828047253fc5efd7c67ffbe37e8a32cf87f" exitCode=0 Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.212403 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"a3f390e836420052d8007a8696e14828047253fc5efd7c67ffbe37e8a32cf87f"} Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.228821 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podStartSLOduration=1.930057504 podStartE2EDuration="10.228798867s" podCreationTimestamp="2026-01-28 18:31:39 +0000 UTC" firstStartedPulling="2026-01-28 18:31:39.91083077 +0000 UTC m=+1110.737393591" lastFinishedPulling="2026-01-28 18:31:48.209572143 +0000 UTC m=+1119.036134954" observedRunningTime="2026-01-28 18:31:49.226085491 +0000 UTC m=+1120.052648392" watchObservedRunningTime="2026-01-28 18:31:49.228798867 +0000 UTC m=+1120.055361688" Jan 28 18:31:50 crc kubenswrapper[4985]: I0128 18:31:50.087522 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:50 crc kubenswrapper[4985]: I0128 18:31:50.220202 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="7b59bc8d188cb60f10839500f4d239e4f82028acc01ea79094bf48b16d196d3f" exitCode=0 Jan 28 18:31:50 crc kubenswrapper[4985]: I0128 18:31:50.220322 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"7b59bc8d188cb60f10839500f4d239e4f82028acc01ea79094bf48b16d196d3f"} Jan 28 18:31:51 crc kubenswrapper[4985]: I0128 18:31:51.229357 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="e26017e0e9bd57074a816c7ac382b620fe7b45a2283cf81b3b79d29fe6ceec1e" exitCode=0 Jan 28 18:31:51 crc kubenswrapper[4985]: I0128 18:31:51.229452 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"e26017e0e9bd57074a816c7ac382b620fe7b45a2283cf81b3b79d29fe6ceec1e"} Jan 28 18:31:52 crc kubenswrapper[4985]: I0128 18:31:52.239894 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"bae530c428949b3d5d3547f623b72611b427961e6e638679792d2edab1b5d06f"} Jan 28 18:31:52 crc kubenswrapper[4985]: I0128 18:31:52.240193 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8"} Jan 28 18:31:52 crc 
kubenswrapper[4985]: I0128 18:31:52.240205 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1"} Jan 28 18:31:53 crc kubenswrapper[4985]: I0128 18:31:53.254130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"c9e858ad5d739a82ca8eb06dac2dc8e8d78e9ba2aed560b5b10f7c3c6331d2d3"} Jan 28 18:31:53 crc kubenswrapper[4985]: I0128 18:31:53.254455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"5dd1e59090599b9440555f63a8837cb32977721ba8696f470d0c913549edfbc7"} Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.264455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"a0c445090f577133e74cd752367f1ce2754e4f088f7a54104278f9da1e09484f"} Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.264837 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.287847 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qlsnv" podStartSLOduration=7.684439962 podStartE2EDuration="16.287830276s" podCreationTimestamp="2026-01-28 18:31:38 +0000 UTC" firstStartedPulling="2026-01-28 18:31:39.629449896 +0000 UTC m=+1110.456012717" lastFinishedPulling="2026-01-28 18:31:48.23284021 +0000 UTC m=+1119.059403031" observedRunningTime="2026-01-28 18:31:54.283319428 +0000 UTC m=+1125.109882249" watchObservedRunningTime="2026-01-28 18:31:54.287830276 +0000 UTC m=+1125.114393097" Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.338028 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.374640 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:59 crc kubenswrapper[4985]: I0128 18:31:59.371813 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:32:00 crc kubenswrapper[4985]: I0128 18:32:00.966863 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6lq6d" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.810402 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-847cx"] Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.811964 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.856523 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.856705 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.856858 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-l44jq" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.857406 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") pod \"openstack-operator-index-847cx\" (UID: \"0c991bfb-875d-4aa7-b36f-08a198a36da9\") " pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.865825 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"] Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.958985 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") pod \"openstack-operator-index-847cx\" (UID: \"0c991bfb-875d-4aa7-b36f-08a198a36da9\") " pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.978050 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") pod \"openstack-operator-index-847cx\" (UID: \"0c991bfb-875d-4aa7-b36f-08a198a36da9\") " pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:04 crc kubenswrapper[4985]: I0128 18:32:04.176161 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:04 crc kubenswrapper[4985]: I0128 18:32:04.620953 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"] Jan 28 18:32:04 crc kubenswrapper[4985]: W0128 18:32:04.625039 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c991bfb_875d_4aa7_b36f_08a198a36da9.slice/crio-6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6 WatchSource:0}: Error finding container 6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6: Status 404 returned error can't find the container with id 6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6 Jan 28 18:32:05 crc kubenswrapper[4985]: I0128 18:32:05.384940 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-847cx" event={"ID":"0c991bfb-875d-4aa7-b36f-08a198a36da9","Type":"ContainerStarted","Data":"6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6"} Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.189122 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"] Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.806571 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wnjfp"] Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.808095 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.830454 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wnjfp"] Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.923532 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4mhj\" (UniqueName: \"kubernetes.io/projected/3314cb32-9bb8-46fd-b28e-5a6e9b779fa7-kube-api-access-v4mhj\") pod \"openstack-operator-index-wnjfp\" (UID: \"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7\") " pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:08 crc kubenswrapper[4985]: I0128 18:32:08.025112 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4mhj\" (UniqueName: \"kubernetes.io/projected/3314cb32-9bb8-46fd-b28e-5a6e9b779fa7-kube-api-access-v4mhj\") pod \"openstack-operator-index-wnjfp\" (UID: \"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7\") " pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:08 crc kubenswrapper[4985]: I0128 18:32:08.047229 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4mhj\" (UniqueName: \"kubernetes.io/projected/3314cb32-9bb8-46fd-b28e-5a6e9b779fa7-kube-api-access-v4mhj\") pod \"openstack-operator-index-wnjfp\" (UID: \"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7\") " pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:08 crc kubenswrapper[4985]: I0128 18:32:08.136314 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:09 crc kubenswrapper[4985]: I0128 18:32:09.217365 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wnjfp"] Jan 28 18:32:09 crc kubenswrapper[4985]: I0128 18:32:09.375055 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:32:11 crc kubenswrapper[4985]: I0128 18:32:11.186508 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:32:11 crc kubenswrapper[4985]: I0128 18:32:11.187092 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:32:11 crc kubenswrapper[4985]: I0128 18:32:11.443680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wnjfp" event={"ID":"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7","Type":"ContainerStarted","Data":"fc84769779f63e0226ec33479e7f491d14108554ee38913895f8cd0bd86864d3"} Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.474388 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wnjfp" event={"ID":"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7","Type":"ContainerStarted","Data":"a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a"} Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.476430 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-847cx" event={"ID":"0c991bfb-875d-4aa7-b36f-08a198a36da9","Type":"ContainerStarted","Data":"58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef"} Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.476648 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-847cx" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerName="registry-server" containerID="cri-o://58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef" gracePeriod=2 Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.502700 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wnjfp" podStartSLOduration=4.959035482 podStartE2EDuration="7.502451459s" podCreationTimestamp="2026-01-28 18:32:07 +0000 UTC" firstStartedPulling="2026-01-28 18:32:11.040688486 +0000 UTC m=+1141.867251317" lastFinishedPulling="2026-01-28 18:32:13.584104453 +0000 UTC m=+1144.410667294" observedRunningTime="2026-01-28 18:32:14.49538576 +0000 UTC m=+1145.321948631" watchObservedRunningTime="2026-01-28 18:32:14.502451459 +0000 UTC m=+1145.329014280" Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.519304 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-847cx" podStartSLOduration=2.566795185 podStartE2EDuration="11.519275024s" podCreationTimestamp="2026-01-28 18:32:03 +0000 UTC" firstStartedPulling="2026-01-28 18:32:04.627563899 +0000 UTC 
m=+1135.454126720" lastFinishedPulling="2026-01-28 18:32:13.580043728 +0000 UTC m=+1144.406606559" observedRunningTime="2026-01-28 18:32:14.516726752 +0000 UTC m=+1145.343289583" watchObservedRunningTime="2026-01-28 18:32:14.519275024 +0000 UTC m=+1145.345837885" Jan 28 18:32:15 crc kubenswrapper[4985]: I0128 18:32:15.487363 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerID="58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef" exitCode=0 Jan 28 18:32:15 crc kubenswrapper[4985]: I0128 18:32:15.487427 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-847cx" event={"ID":"0c991bfb-875d-4aa7-b36f-08a198a36da9","Type":"ContainerDied","Data":"58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef"} Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.122042 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.201132 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") pod \"0c991bfb-875d-4aa7-b36f-08a198a36da9\" (UID: \"0c991bfb-875d-4aa7-b36f-08a198a36da9\") " Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.206316 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w" (OuterVolumeSpecName: "kube-api-access-dmp8w") pod "0c991bfb-875d-4aa7-b36f-08a198a36da9" (UID: "0c991bfb-875d-4aa7-b36f-08a198a36da9"). InnerVolumeSpecName "kube-api-access-dmp8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.303292 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") on node \"crc\" DevicePath \"\"" Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.495933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-847cx" event={"ID":"0c991bfb-875d-4aa7-b36f-08a198a36da9","Type":"ContainerDied","Data":"6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6"} Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.495976 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.495996 4985 scope.go:117] "RemoveContainer" containerID="58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef"
Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.536516 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"]
Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.542429 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"]
Jan 28 18:32:17 crc kubenswrapper[4985]: I0128 18:32:17.275995 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" path="/var/lib/kubelet/pods/0c991bfb-875d-4aa7-b36f-08a198a36da9/volumes"
Jan 28 18:32:18 crc kubenswrapper[4985]: I0128 18:32:18.137303 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wnjfp"
Jan 28 18:32:18 crc kubenswrapper[4985]: I0128 18:32:18.138080 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wnjfp"
Jan 28 18:32:18 crc kubenswrapper[4985]: I0128 18:32:18.173809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wnjfp"
Jan 28 18:32:18 crc kubenswrapper[4985]: I0128 18:32:18.547538 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wnjfp"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.237653 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"]
Jan 28 18:32:25 crc kubenswrapper[4985]: E0128 18:32:25.238482 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerName="registry-server"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.238495 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerName="registry-server"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.238645 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerName="registry-server"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.239689 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"
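[editor's note] The 18:32:18 probe transitions above show the usual gating: the startup probe flips from unhealthy to started, and only then does the readiness probe report ready (it logs status="" while still unknown). The registry pod's actual spec is not in this log; the sketch below only illustrates startup-then-readiness wiring, and the port, period, and threshold values are assumptions:

```go
// Illustrative only: probe wiring matching the startup -> readiness gating
// seen above. Port and threshold values are hypothetical, not from this log.
package probes

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func registryContainer() corev1.Container {
	return corev1.Container{
		Name: "registry-server",
		// Until this probe reports "started", readiness stays "" (unknown)
		// and liveness/readiness probes are held off.
		StartupProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)},
			},
			PeriodSeconds:    5,
			FailureThreshold: 12,
		},
		// Runs only after startup succeeds; flips the pod Ready condition.
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)},
			},
			PeriodSeconds: 10,
		},
	}
}
```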
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.246370 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-w5lcz"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.253104 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"]
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.272416 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.272546 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.272600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.373860 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.373946 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.374173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"
Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.374959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"
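[editor's note] The three volumes being mounted here are the typical bundle-unpack layout: two emptyDir scratch volumes (util, bundle) shared between the job's containers, plus the kube-api-access-* projected service-account token the kubelet mounts into every pod. A sketch of that volume set with client-go types; the projected volume is normally injected by the API server rather than written by hand, and the expiry value is illustrative:

```go
package volumes

import corev1 "k8s.io/api/core/v1"

// Volume set implied by the reconciler entries above. Shown only to make
// the mount messages concrete; not taken from the pod's actual manifest.
func unpackVolumes() []corev1.Volume {
	var expiry int64 = 3607 // assumed token lifetime, not from the log
	return []corev1.Volume{
		{Name: "util", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "bundle", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "kube-api-access-gw25v", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
				},
			},
		}},
	}
}
```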
\"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.375301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.397419 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.574660 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:26 crc kubenswrapper[4985]: I0128 18:32:26.021391 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"] Jan 28 18:32:26 crc kubenswrapper[4985]: I0128 18:32:26.600002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerStarted","Data":"759ff3ea70b0b4ae7d7d5bff2276f3f6400ffef8d0a0df4486bbe1ab81bdf4a8"} Jan 28 18:32:26 crc kubenswrapper[4985]: I0128 18:32:26.600049 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerStarted","Data":"1d3469dcbbd2221fa466fdc12e464d9ffe30dee105f24ca5c259d7e5823c660e"} Jan 28 18:32:27 crc kubenswrapper[4985]: I0128 18:32:27.611574 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerID="759ff3ea70b0b4ae7d7d5bff2276f3f6400ffef8d0a0df4486bbe1ab81bdf4a8" exitCode=0 Jan 28 18:32:27 crc kubenswrapper[4985]: I0128 18:32:27.611675 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerDied","Data":"759ff3ea70b0b4ae7d7d5bff2276f3f6400ffef8d0a0df4486bbe1ab81bdf4a8"} Jan 28 18:32:29 crc kubenswrapper[4985]: I0128 18:32:29.635545 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerID="2a1420691545df1dbfb468561eab6f368aa72604a8fa49d7c79feb86d8bfb5cc" exitCode=0 Jan 28 18:32:29 crc kubenswrapper[4985]: I0128 18:32:29.635756 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" 
event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerDied","Data":"2a1420691545df1dbfb468561eab6f368aa72604a8fa49d7c79feb86d8bfb5cc"} Jan 28 18:32:30 crc kubenswrapper[4985]: I0128 18:32:30.649927 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerID="82078f9a9ef7771cc51696c1cfd3e236e2109c92249b4c20bec63715dcc1d4ab" exitCode=0 Jan 28 18:32:30 crc kubenswrapper[4985]: I0128 18:32:30.650010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerDied","Data":"82078f9a9ef7771cc51696c1cfd3e236e2109c92249b4c20bec63715dcc1d4ab"} Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.266994 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.405906 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") pod \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.406362 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") pod \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.406541 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") pod \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.407334 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle" (OuterVolumeSpecName: "bundle") pod "b5e9d40d-8ad9-4602-ac23-7cad303b1696" (UID: "b5e9d40d-8ad9-4602-ac23-7cad303b1696"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.415813 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v" (OuterVolumeSpecName: "kube-api-access-gw25v") pod "b5e9d40d-8ad9-4602-ac23-7cad303b1696" (UID: "b5e9d40d-8ad9-4602-ac23-7cad303b1696"). InnerVolumeSpecName "kube-api-access-gw25v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.429476 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util" (OuterVolumeSpecName: "util") pod "b5e9d40d-8ad9-4602-ac23-7cad303b1696" (UID: "b5e9d40d-8ad9-4602-ac23-7cad303b1696"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.515662 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.515713 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.515736 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") on node \"crc\" DevicePath \"\"" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.671309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerDied","Data":"1d3469dcbbd2221fa466fdc12e464d9ffe30dee105f24ca5c259d7e5823c660e"} Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.671341 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3469dcbbd2221fa466fdc12e464d9ffe30dee105f24ca5c259d7e5823c660e" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.671433 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.826547 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"] Jan 28 18:32:37 crc kubenswrapper[4985]: E0128 18:32:37.827567 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="extract" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.827581 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="extract" Jan 28 18:32:37 crc kubenswrapper[4985]: E0128 18:32:37.827602 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="pull" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.827608 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="pull" Jan 28 18:32:37 crc kubenswrapper[4985]: E0128 18:32:37.827622 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="util" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.827628 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="util" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.827756 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="extract" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.828263 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.830850 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-flwrr"
Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.872183 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"]
Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.002591 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbt4\" (UniqueName: \"kubernetes.io/projected/82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62-kube-api-access-lwbt4\") pod \"openstack-operator-controller-init-687c66fd56-xdvhx\" (UID: \"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62\") " pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"
Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.104066 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwbt4\" (UniqueName: \"kubernetes.io/projected/82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62-kube-api-access-lwbt4\") pod \"openstack-operator-controller-init-687c66fd56-xdvhx\" (UID: \"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62\") " pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"
Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.133865 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwbt4\" (UniqueName: \"kubernetes.io/projected/82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62-kube-api-access-lwbt4\") pod \"openstack-operator-controller-init-687c66fd56-xdvhx\" (UID: \"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62\") " pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"
Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.150742 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"
Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.684221 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"]
Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.722805 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" event={"ID":"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62","Type":"ContainerStarted","Data":"935b66526b9ec7e30d57989d97030486c3e4a2cdc4b4fecdf7789e423a532d09"}
Jan 28 18:32:41 crc kubenswrapper[4985]: I0128 18:32:41.187831 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:32:41 crc kubenswrapper[4985]: I0128 18:32:41.188141 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:32:46 crc kubenswrapper[4985]: I0128 18:32:46.845756 4985 scope.go:117] "RemoveContainer" containerID="0d1f250737c643fbc85140566ed81835e3f4db2d92ec1ed36f15c0c9eb2c030a"
Jan 28 18:32:51 crc kubenswrapper[4985]: I0128 18:32:51.835342 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" event={"ID":"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62","Type":"ContainerStarted","Data":"8a3f19cb6aa7abaef144114e6dd8bdb0d9b95990c08eded3c8ad0a1adc11123e"}
Jan 28 18:32:51 crc kubenswrapper[4985]: I0128 18:32:51.836205 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"
Jan 28 18:32:51 crc kubenswrapper[4985]: I0128 18:32:51.901862 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podStartSLOduration=2.407590585 podStartE2EDuration="14.901837167s" podCreationTimestamp="2026-01-28 18:32:37 +0000 UTC" firstStartedPulling="2026-01-28 18:32:38.691811297 +0000 UTC m=+1169.518374118" lastFinishedPulling="2026-01-28 18:32:51.186057839 +0000 UTC m=+1182.012620700" observedRunningTime="2026-01-28 18:32:51.885034813 +0000 UTC m=+1182.711597664" watchObservedRunningTime="2026-01-28 18:32:51.901837167 +0000 UTC m=+1182.728399998"
Jan 28 18:32:58 crc kubenswrapper[4985]: I0128 18:32:58.154345 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"
Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.185797 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.188392 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
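[editor's note] The machine-config-daemon liveness failures above recur every 30 seconds (18:32:11, 18:32:41, 18:33:11), which is consistent with periodSeconds=30 and failureThreshold=3 before the kubelet restarts the container in the entries that follow; those values are inferred from the cadence, not read from the pod spec. The check itself is a plain HTTP GET where any transport error, here "connection refused", counts as a failure:

```go
package prober

import (
	"fmt"
	"net/http"
	"time"
)

// Sketch of the HTTP liveness check behind the patch_prober lines above.
// The 1s timeout is an assumption; the URL is the one in the log.
func probeOnce() error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // logged as probeResult="failure"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}
```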
podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.188553 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.189569 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.189810 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093" gracePeriod=600 Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.991667 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093" exitCode=0 Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.991749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093"} Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.992013 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a"} Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.992052 4985 scope.go:117] "RemoveContainer" containerID="040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.552687 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.554599 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.556531 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-hnhrg" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.580108 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.581569 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.585090 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ndlm5"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.597158 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.605424 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6dk7\" (UniqueName: \"kubernetes.io/projected/4fa1b302-aad3-4e6e-9cd2-bba65262c1e8-kube-api-access-g6dk7\") pod \"barbican-operator-controller-manager-7f86f8796f-ww4nj\" (UID: \"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.608830 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.610200 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.614545 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8j87r"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.628127 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.629099 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.632802 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ndrvf"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.639563 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.640836 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.650845 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.652425 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cmgj7"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.667023 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.686325 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712354 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2z62\" (UniqueName: \"kubernetes.io/projected/4dfb4621-d061-4224-8aee-840726565aa3-kube-api-access-b2z62\") pod \"designate-operator-controller-manager-b45d7bf98-75d84\" (UID: \"4dfb4621-d061-4224-8aee-840726565aa3\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712425 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2qth\" (UniqueName: \"kubernetes.io/projected/cc7f29e1-e6e0-45a0-920a-4b18d8204c65-kube-api-access-p2qth\") pod \"heat-operator-controller-manager-594c8c9d5d-fm7nr\" (UID: \"cc7f29e1-e6e0-45a0-920a-4b18d8204c65\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712520 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkghb\" (UniqueName: \"kubernetes.io/projected/99893bb5-33ef-4159-bf8f-1c79a58e74d9-kube-api-access-xkghb\") pod \"glance-operator-controller-manager-78fdd796fd-6bdmh\" (UID: \"99893bb5-33ef-4159-bf8f-1c79a58e74d9\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712565 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkzq\" (UniqueName: \"kubernetes.io/projected/7ef21481-ade5-436a-ae3a-f284a7e438d3-kube-api-access-dmkzq\") pod \"cinder-operator-controller-manager-7478f7dbf9-7gfrh\" (UID: \"7ef21481-ade5-436a-ae3a-f284a7e438d3\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6dk7\" (UniqueName: \"kubernetes.io/projected/4fa1b302-aad3-4e6e-9cd2-bba65262c1e8-kube-api-access-g6dk7\") pod \"barbican-operator-controller-manager-7f86f8796f-ww4nj\" (UID: \"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.742930 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.744967 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6dk7\" (UniqueName: \"kubernetes.io/projected/4fa1b302-aad3-4e6e-9cd2-bba65262c1e8-kube-api-access-g6dk7\") pod \"barbican-operator-controller-manager-7f86f8796f-ww4nj\" (UID: \"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.755879 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.758645 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.764957 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pfg5x"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.789319 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.791308 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.804047 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.804240 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-j2s8q"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.819988 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkghb\" (UniqueName: \"kubernetes.io/projected/99893bb5-33ef-4159-bf8f-1c79a58e74d9-kube-api-access-xkghb\") pod \"glance-operator-controller-manager-78fdd796fd-6bdmh\" (UID: \"99893bb5-33ef-4159-bf8f-1c79a58e74d9\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820033 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkzq\" (UniqueName: \"kubernetes.io/projected/7ef21481-ade5-436a-ae3a-f284a7e438d3-kube-api-access-dmkzq\") pod \"cinder-operator-controller-manager-7478f7dbf9-7gfrh\" (UID: \"7ef21481-ade5-436a-ae3a-f284a7e438d3\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820092 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6n72\" (UniqueName: \"kubernetes.io/projected/697da6ae-2950-468c-82e9-bcb1a1af61e7-kube-api-access-b6n72\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"
\"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820169 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdqdn\" (UniqueName: \"kubernetes.io/projected/99b88683-3e0a-4afa-91ab-71feac27fba1-kube-api-access-tdqdn\") pod \"horizon-operator-controller-manager-77d5c5b54f-6skp6\" (UID: \"99b88683-3e0a-4afa-91ab-71feac27fba1\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820203 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2z62\" (UniqueName: \"kubernetes.io/projected/4dfb4621-d061-4224-8aee-840726565aa3-kube-api-access-b2z62\") pod \"designate-operator-controller-manager-b45d7bf98-75d84\" (UID: \"4dfb4621-d061-4224-8aee-840726565aa3\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820227 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2qth\" (UniqueName: \"kubernetes.io/projected/cc7f29e1-e6e0-45a0-920a-4b18d8204c65-kube-api-access-p2qth\") pod \"heat-operator-controller-manager-594c8c9d5d-fm7nr\" (UID: \"cc7f29e1-e6e0-45a0-920a-4b18d8204c65\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.856717 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkghb\" (UniqueName: \"kubernetes.io/projected/99893bb5-33ef-4159-bf8f-1c79a58e74d9-kube-api-access-xkghb\") pod \"glance-operator-controller-manager-78fdd796fd-6bdmh\" (UID: \"99893bb5-33ef-4159-bf8f-1c79a58e74d9\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.860996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2qth\" (UniqueName: \"kubernetes.io/projected/cc7f29e1-e6e0-45a0-920a-4b18d8204c65-kube-api-access-p2qth\") pod \"heat-operator-controller-manager-594c8c9d5d-fm7nr\" (UID: \"cc7f29e1-e6e0-45a0-920a-4b18d8204c65\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.861087 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.862374 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.872317 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-k2q85"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.872698 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2z62\" (UniqueName: \"kubernetes.io/projected/4dfb4621-d061-4224-8aee-840726565aa3-kube-api-access-b2z62\") pod \"designate-operator-controller-manager-b45d7bf98-75d84\" (UID: \"4dfb4621-d061-4224-8aee-840726565aa3\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.872869 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmkzq\" (UniqueName: \"kubernetes.io/projected/7ef21481-ade5-436a-ae3a-f284a7e438d3-kube-api-access-dmkzq\") pod \"cinder-operator-controller-manager-7478f7dbf9-7gfrh\" (UID: \"7ef21481-ade5-436a-ae3a-f284a7e438d3\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.879983 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.884316 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6"]
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.906165 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.922200 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.922522 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6n72\" (UniqueName: \"kubernetes.io/projected/697da6ae-2950-468c-82e9-bcb1a1af61e7-kube-api-access-b6n72\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.922697 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdqdn\" (UniqueName: \"kubernetes.io/projected/99b88683-3e0a-4afa-91ab-71feac27fba1-kube-api-access-tdqdn\") pod \"horizon-operator-controller-manager-77d5c5b54f-6skp6\" (UID: \"99b88683-3e0a-4afa-91ab-71feac27fba1\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6"
Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.922830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjt6k\" (UniqueName: \"kubernetes.io/projected/75e682e9-e5a5-47f1-83cc-c8004ebe224a-kube-api-access-zjt6k\") pod \"ironic-operator-controller-manager-598f7747c9-s2n6z\" (UID: \"75e682e9-e5a5-47f1-83cc-c8004ebe224a\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"
\"75e682e9-e5a5-47f1-83cc-c8004ebe224a\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" Jan 28 18:33:25 crc kubenswrapper[4985]: E0128 18:33:25.923130 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:25 crc kubenswrapper[4985]: E0128 18:33:25.923310 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:26.42328407 +0000 UTC m=+1217.249846901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.929557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.930332 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.950333 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.950969 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.953466 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdqdn\" (UniqueName: \"kubernetes.io/projected/99b88683-3e0a-4afa-91ab-71feac27fba1-kube-api-access-tdqdn\") pod \"horizon-operator-controller-manager-77d5c5b54f-6skp6\" (UID: \"99b88683-3e0a-4afa-91ab-71feac27fba1\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.965494 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.971918 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6n72\" (UniqueName: \"kubernetes.io/projected/697da6ae-2950-468c-82e9-bcb1a1af61e7-kube-api-access-b6n72\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.984314 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.985357 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.000823 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gmkq2"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.001445 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.020320 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.021408 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.025612 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rkfcv"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.026773 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6lq\" (UniqueName: \"kubernetes.io/projected/b5a0c28d-1434-40f0-8759-d76b65dc2c30-kube-api-access-fv6lq\") pod \"keystone-operator-controller-manager-b8b6d4659-hktv5\" (UID: \"b5a0c28d-1434-40f0-8759-d76b65dc2c30\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.027190 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjt6k\" (UniqueName: \"kubernetes.io/projected/75e682e9-e5a5-47f1-83cc-c8004ebe224a-kube-api-access-zjt6k\") pod \"ironic-operator-controller-manager-598f7747c9-s2n6z\" (UID: \"75e682e9-e5a5-47f1-83cc-c8004ebe224a\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.077320 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.085070 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjt6k\" (UniqueName: \"kubernetes.io/projected/75e682e9-e5a5-47f1-83cc-c8004ebe224a-kube-api-access-zjt6k\") pod \"ironic-operator-controller-manager-598f7747c9-s2n6z\" (UID: \"75e682e9-e5a5-47f1-83cc-c8004ebe224a\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.103100 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.111965 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.116334 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4hcfd"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.129758 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mf2c\" (UniqueName: \"kubernetes.io/projected/654a2c56-81a7-4b32-ad1d-c4d60b054b47-kube-api-access-7mf2c\") pod \"manila-operator-controller-manager-78c6999f6f-9lm5f\" (UID: \"654a2c56-81a7-4b32-ad1d-c4d60b054b47\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.129897 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6lq\" (UniqueName: \"kubernetes.io/projected/b5a0c28d-1434-40f0-8759-d76b65dc2c30-kube-api-access-fv6lq\") pod \"keystone-operator-controller-manager-b8b6d4659-hktv5\" (UID: \"b5a0c28d-1434-40f0-8759-d76b65dc2c30\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.134639 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.167326 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.191279 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6lq\" (UniqueName: \"kubernetes.io/projected/b5a0c28d-1434-40f0-8759-d76b65dc2c30-kube-api-access-fv6lq\") pod \"keystone-operator-controller-manager-b8b6d4659-hktv5\" (UID: \"b5a0c28d-1434-40f0-8759-d76b65dc2c30\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.216081 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.217973 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.223864 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-n9xjt"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.231701 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mf2c\" (UniqueName: \"kubernetes.io/projected/654a2c56-81a7-4b32-ad1d-c4d60b054b47-kube-api-access-7mf2c\") pod \"manila-operator-controller-manager-78c6999f6f-9lm5f\" (UID: \"654a2c56-81a7-4b32-ad1d-c4d60b054b47\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.231781 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2vn\" (UniqueName: \"kubernetes.io/projected/9897766d-6497-4d0e-bd9a-ef8e31a08e24-kube-api-access-2f2vn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rbn84\" (UID: \"9897766d-6497-4d0e-bd9a-ef8e31a08e24\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.269394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mf2c\" (UniqueName: \"kubernetes.io/projected/654a2c56-81a7-4b32-ad1d-c4d60b054b47-kube-api-access-7mf2c\") pod \"manila-operator-controller-manager-78c6999f6f-9lm5f\" (UID: \"654a2c56-81a7-4b32-ad1d-c4d60b054b47\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.292891 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.294117 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.296640 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dbsgd"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.317879 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.344179 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.345311 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlwq\" (UniqueName: \"kubernetes.io/projected/367b6525-0367-437a-9fe3-b2007411f4af-kube-api-access-5zlwq\") pod \"octavia-operator-controller-manager-5f4cd88d46-4smn2\" (UID: \"367b6525-0367-437a-9fe3-b2007411f4af\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.345374 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6pmh\" (UniqueName: \"kubernetes.io/projected/873dc5cd-5c8e-417e-b99a-a52dfcfd701b-kube-api-access-m6pmh\") pod \"neutron-operator-controller-manager-78d58447c5-dlssr\" (UID: \"873dc5cd-5c8e-417e-b99a-a52dfcfd701b\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.345462 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f2vn\" (UniqueName: \"kubernetes.io/projected/9897766d-6497-4d0e-bd9a-ef8e31a08e24-kube-api-access-2f2vn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rbn84\" (UID: \"9897766d-6497-4d0e-bd9a-ef8e31a08e24\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.349910 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.350982 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.359369 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5c2rc"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.362161 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.363453 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.366756 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.380621 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.380890 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zdlj6"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.382139 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.390660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f2vn\" (UniqueName: \"kubernetes.io/projected/9897766d-6497-4d0e-bd9a-ef8e31a08e24-kube-api-access-2f2vn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rbn84\" (UID: \"9897766d-6497-4d0e-bd9a-ef8e31a08e24\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.454654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctn8h\" (UniqueName: \"kubernetes.io/projected/9c7284ab-b40f-4275-b85e-77aebd660135-kube-api-access-ctn8h\") pod \"nova-operator-controller-manager-7bdb645866-7mtzf\" (UID: \"9c7284ab-b40f-4275-b85e-77aebd660135\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.454794 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zlwq\" (UniqueName: \"kubernetes.io/projected/367b6525-0367-437a-9fe3-b2007411f4af-kube-api-access-5zlwq\") pod \"octavia-operator-controller-manager-5f4cd88d46-4smn2\" (UID: \"367b6525-0367-437a-9fe3-b2007411f4af\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.454840 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6pmh\" (UniqueName: \"kubernetes.io/projected/873dc5cd-5c8e-417e-b99a-a52dfcfd701b-kube-api-access-m6pmh\") pod \"neutron-operator-controller-manager-78d58447c5-dlssr\" (UID: \"873dc5cd-5c8e-417e-b99a-a52dfcfd701b\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.454977 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"
Jan 28 18:33:26 crc kubenswrapper[4985]: E0128 18:33:26.455173 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 28 18:33:26 crc kubenswrapper[4985]: E0128 18:33:26.455223 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:27.455209058 +0000 UTC m=+1218.281771879 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.467350 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.487268 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zlwq\" (UniqueName: \"kubernetes.io/projected/367b6525-0367-437a-9fe3-b2007411f4af-kube-api-access-5zlwq\") pod \"octavia-operator-controller-manager-5f4cd88d46-4smn2\" (UID: \"367b6525-0367-437a-9fe3-b2007411f4af\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.491316 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6pmh\" (UniqueName: \"kubernetes.io/projected/873dc5cd-5c8e-417e-b99a-a52dfcfd701b-kube-api-access-m6pmh\") pod \"neutron-operator-controller-manager-78d58447c5-dlssr\" (UID: \"873dc5cd-5c8e-417e-b99a-a52dfcfd701b\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.497401 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.509383 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.549365 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.549401 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.556547 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.556663 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctn8h\" (UniqueName: \"kubernetes.io/projected/9c7284ab-b40f-4275-b85e-77aebd660135-kube-api-access-ctn8h\") pod \"nova-operator-controller-manager-7bdb645866-7mtzf\" (UID: \"9c7284ab-b40f-4275-b85e-77aebd660135\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.556691 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kzw\" (UniqueName: \"kubernetes.io/projected/70329607-4bbe-43ad-bb7a-2b62f26af473-kube-api-access-h5kzw\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.575423 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.576549 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.578454 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6fcvv" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.581771 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctn8h\" (UniqueName: \"kubernetes.io/projected/9c7284ab-b40f-4275-b85e-77aebd660135-kube-api-access-ctn8h\") pod \"nova-operator-controller-manager-7bdb645866-7mtzf\" (UID: \"9c7284ab-b40f-4275-b85e-77aebd660135\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.585603 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.586866 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.589993 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nw7jf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.600671 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.623827 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.651867 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.659087 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5kzw\" (UniqueName: \"kubernetes.io/projected/70329607-4bbe-43ad-bb7a-2b62f26af473-kube-api-access-h5kzw\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.659245 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: E0128 18:33:26.659579 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:26 crc kubenswrapper[4985]: E0128 18:33:26.659650 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:27.159631399 +0000 UTC m=+1217.986194220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.683960 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.686657 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.687600 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5kzw\" (UniqueName: \"kubernetes.io/projected/70329607-4bbe-43ad-bb7a-2b62f26af473-kube-api-access-h5kzw\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.688688 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9wkb5" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.694423 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.695886 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.698671 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6dpzx" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.710231 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.722018 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.727787 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.745087 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.746308 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.748410 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kfvvt" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.769621 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g84zp\" (UniqueName: \"kubernetes.io/projected/91971c24-6187-432c-84ba-65dba69b4598-kube-api-access-g84zp\") pod \"placement-operator-controller-manager-79d5ccc684-qn5x9\" (UID: \"91971c24-6187-432c-84ba-65dba69b4598\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.769671 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x57sk\" (UniqueName: \"kubernetes.io/projected/50682373-a3d7-491e-84a0-1d5613ee2e8a-kube-api-access-x57sk\") pod \"ovn-operator-controller-manager-6f75f45d54-v5mmf\" (UID: \"50682373-a3d7-491e-84a0-1d5613ee2e8a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.789714 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.821360 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-xzkhh"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.822576 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.824925 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gjb5r" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.831921 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-xzkhh"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.854787 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.856004 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.859744 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.859955 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.860111 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4bpcw" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.864890 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.884681 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7prf\" (UniqueName: \"kubernetes.io/projected/1310770f-7cb7-4874-b2a0-4ef733911716-kube-api-access-s7prf\") pod \"test-operator-controller-manager-69797bbcbd-xwzkh\" (UID: \"1310770f-7cb7-4874-b2a0-4ef733911716\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.884763 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7w4p\" (UniqueName: \"kubernetes.io/projected/359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3-kube-api-access-m7w4p\") pod \"telemetry-operator-controller-manager-74c974475f-b9j67\" (UID: \"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3\") " pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.884949 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8sp\" (UniqueName: \"kubernetes.io/projected/c95374e8-7d41-4a49-add9-7f28196d70eb-kube-api-access-5d8sp\") pod \"swift-operator-controller-manager-547cbdb99f-9kbdr\" (UID: \"c95374e8-7d41-4a49-add9-7f28196d70eb\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.885026 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g84zp\" (UniqueName: \"kubernetes.io/projected/91971c24-6187-432c-84ba-65dba69b4598-kube-api-access-g84zp\") pod \"placement-operator-controller-manager-79d5ccc684-qn5x9\" (UID: \"91971c24-6187-432c-84ba-65dba69b4598\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.885352 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.886662 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.889554 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-r5w54" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.905628 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.911513 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x57sk\" (UniqueName: \"kubernetes.io/projected/50682373-a3d7-491e-84a0-1d5613ee2e8a-kube-api-access-x57sk\") pod \"ovn-operator-controller-manager-6f75f45d54-v5mmf\" (UID: \"50682373-a3d7-491e-84a0-1d5613ee2e8a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.928102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g84zp\" (UniqueName: \"kubernetes.io/projected/91971c24-6187-432c-84ba-65dba69b4598-kube-api-access-g84zp\") pod \"placement-operator-controller-manager-79d5ccc684-qn5x9\" (UID: \"91971c24-6187-432c-84ba-65dba69b4598\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.931961 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x57sk\" (UniqueName: \"kubernetes.io/projected/50682373-a3d7-491e-84a0-1d5613ee2e8a-kube-api-access-x57sk\") pod \"ovn-operator-controller-manager-6f75f45d54-v5mmf\" (UID: \"50682373-a3d7-491e-84a0-1d5613ee2e8a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.976809 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"] Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014170 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7prf\" (UniqueName: \"kubernetes.io/projected/1310770f-7cb7-4874-b2a0-4ef733911716-kube-api-access-s7prf\") pod \"test-operator-controller-manager-69797bbcbd-xwzkh\" (UID: \"1310770f-7cb7-4874-b2a0-4ef733911716\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pcn\" (UniqueName: \"kubernetes.io/projected/d4d6e990-839d-4186-9382-1a67922556df-kube-api-access-s2pcn\") pod \"watcher-operator-controller-manager-564965969-xzkhh\" (UID: \"d4d6e990-839d-4186-9382-1a67922556df\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014485 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7w4p\" (UniqueName: \"kubernetes.io/projected/359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3-kube-api-access-m7w4p\") pod \"telemetry-operator-controller-manager-74c974475f-b9j67\" (UID: \"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3\") " pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014538 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxbxh\" (UniqueName: \"kubernetes.io/projected/38846228-cec9-4a59-b9bb-c766121dacde-kube-api-access-zxbxh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7s7s2\" (UID: \"38846228-cec9-4a59-b9bb-c766121dacde\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014560 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsxl\" (UniqueName: \"kubernetes.io/projected/c1e8524e-e047-4872-9ee1-ae4e013f8825-kube-api-access-wmsxl\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014582 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014598 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014642 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8sp\" (UniqueName: \"kubernetes.io/projected/c95374e8-7d41-4a49-add9-7f28196d70eb-kube-api-access-5d8sp\") pod \"swift-operator-controller-manager-547cbdb99f-9kbdr\" (UID: \"c95374e8-7d41-4a49-add9-7f28196d70eb\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.047220 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7prf\" (UniqueName: \"kubernetes.io/projected/1310770f-7cb7-4874-b2a0-4ef733911716-kube-api-access-s7prf\") pod \"test-operator-controller-manager-69797bbcbd-xwzkh\" (UID: \"1310770f-7cb7-4874-b2a0-4ef733911716\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.048810 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.050704 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8sp\" (UniqueName: \"kubernetes.io/projected/c95374e8-7d41-4a49-add9-7f28196d70eb-kube-api-access-5d8sp\") pod \"swift-operator-controller-manager-547cbdb99f-9kbdr\" (UID: \"c95374e8-7d41-4a49-add9-7f28196d70eb\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.077986 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7w4p\" (UniqueName: \"kubernetes.io/projected/359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3-kube-api-access-m7w4p\") pod \"telemetry-operator-controller-manager-74c974475f-b9j67\" (UID: \"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3\") " pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116178 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2pcn\" (UniqueName: \"kubernetes.io/projected/d4d6e990-839d-4186-9382-1a67922556df-kube-api-access-s2pcn\") pod \"watcher-operator-controller-manager-564965969-xzkhh\" (UID: \"d4d6e990-839d-4186-9382-1a67922556df\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116313 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxbxh\" (UniqueName: \"kubernetes.io/projected/38846228-cec9-4a59-b9bb-c766121dacde-kube-api-access-zxbxh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7s7s2\" (UID: \"38846228-cec9-4a59-b9bb-c766121dacde\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsxl\" (UniqueName: \"kubernetes.io/projected/c1e8524e-e047-4872-9ee1-ae4e013f8825-kube-api-access-wmsxl\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116378 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.116554 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.116627 4985 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:27.616607371 +0000 UTC m=+1218.443170192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.117382 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.117424 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:27.617413654 +0000 UTC m=+1218.443976475 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.136390 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxbxh\" (UniqueName: \"kubernetes.io/projected/38846228-cec9-4a59-b9bb-c766121dacde-kube-api-access-zxbxh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7s7s2\" (UID: \"38846228-cec9-4a59-b9bb-c766121dacde\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.137961 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2pcn\" (UniqueName: \"kubernetes.io/projected/d4d6e990-839d-4186-9382-1a67922556df-kube-api-access-s2pcn\") pod \"watcher-operator-controller-manager-564965969-xzkhh\" (UID: \"d4d6e990-839d-4186-9382-1a67922556df\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.138000 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsxl\" (UniqueName: \"kubernetes.io/projected/c1e8524e-e047-4872-9ee1-ae4e013f8825-kube-api-access-wmsxl\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.141805 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.144830 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"] Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.145048 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" event={"ID":"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8","Type":"ContainerStarted","Data":"1c765d46b3cfb7ae3cdf987f0a72114eba08370d5ed07c2d070bcbfc78236f56"} Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.165108 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.177222 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.217837 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.218053 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.218110 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:28.218090396 +0000 UTC m=+1219.044653217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.291911 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.306968 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.368455 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.522627 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh"] Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.524899 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.525129 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.525189 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:29.525172236 +0000 UTC m=+1220.351735057 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.554154 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84"] Jan 28 18:33:27 crc kubenswrapper[4985]: W0128 18:33:27.557121 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99893bb5_33ef_4159_bf8f_1c79a58e74d9.slice/crio-5ed3ee498bd37c360476eb7e76c38d91112c1eb0d1874cd6aceaba58577cae7e WatchSource:0}: Error finding container 5ed3ee498bd37c360476eb7e76c38d91112c1eb0d1874cd6aceaba58577cae7e: Status 404 returned error can't find the container with id 5ed3ee498bd37c360476eb7e76c38d91112c1eb0d1874cd6aceaba58577cae7e Jan 28 18:33:27 crc kubenswrapper[4985]: W0128 18:33:27.560921 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dfb4621_d061_4224_8aee_840726565aa3.slice/crio-9ec18a9a77ad0fdc4f804273a8abe29c520a02bdab0106a53a2c839719a8c029 WatchSource:0}: Error finding container 9ec18a9a77ad0fdc4f804273a8abe29c520a02bdab0106a53a2c839719a8c029: Status 404 returned error can't find the container with id 9ec18a9a77ad0fdc4f804273a8abe29c520a02bdab0106a53a2c839719a8c029 Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.568604 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr"] Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.626722 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 
18:33:27.626791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.627108 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.627179 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:28.627159685 +0000 UTC m=+1219.453722506 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.627108 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.627716 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:28.62770208 +0000 UTC m=+1219.454264901 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.122105 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.139274 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.150219 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod873dc5cd_5c8e_417e_b99a_a52dfcfd701b.slice/crio-94846cb5686126f16d6556aa80994aabf816e2a6268715c2a885ea4c0d524965 WatchSource:0}: Error finding container 94846cb5686126f16d6556aa80994aabf816e2a6268715c2a885ea4c0d524965: Status 404 returned error can't find the container with id 94846cb5686126f16d6556aa80994aabf816e2a6268715c2a885ea4c0d524965 Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.150541 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod654a2c56_81a7_4b32_ad1d_c4d60b054b47.slice/crio-3226c3e56950335b52dfc2884483dd6cf371022588ebf9eea7dca7bda293b6b5 WatchSource:0}: Error finding container 3226c3e56950335b52dfc2884483dd6cf371022588ebf9eea7dca7bda293b6b5: Status 404 returned error can't find the container with id 3226c3e56950335b52dfc2884483dd6cf371022588ebf9eea7dca7bda293b6b5 Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.150840 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c7284ab_b40f_4275_b85e_77aebd660135.slice/crio-f5bf03c5437bcca2b7d7f1632d6f9b02968d89997187b55ae146d48ba4b887ea WatchSource:0}: Error finding container f5bf03c5437bcca2b7d7f1632d6f9b02968d89997187b55ae146d48ba4b887ea: Status 404 returned error can't find the container with id f5bf03c5437bcca2b7d7f1632d6f9b02968d89997187b55ae146d48ba4b887ea Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.151027 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.151177 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99b88683_3e0a_4afa_91ab_71feac27fba1.slice/crio-950783a93d654fc9c4324b80ba1ddbb41213709ccc769713e4f93646fd2a9aed WatchSource:0}: Error finding container 950783a93d654fc9c4324b80ba1ddbb41213709ccc769713e4f93646fd2a9aed: Status 404 returned error can't find the container with id 950783a93d654fc9c4324b80ba1ddbb41213709ccc769713e4f93646fd2a9aed Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.160070 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.165131 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod367b6525_0367_437a_9fe3_b2007411f4af.slice/crio-f56b3d96fddac1a73206f53aabfbfc3690deab36214d567ff1d8b8902021347f 
WatchSource:0}: Error finding container f56b3d96fddac1a73206f53aabfbfc3690deab36214d567ff1d8b8902021347f: Status 404 returned error can't find the container with id f56b3d96fddac1a73206f53aabfbfc3690deab36214d567ff1d8b8902021347f Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.179864 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.180460 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" event={"ID":"99893bb5-33ef-4159-bf8f-1c79a58e74d9","Type":"ContainerStarted","Data":"5ed3ee498bd37c360476eb7e76c38d91112c1eb0d1874cd6aceaba58577cae7e"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.189083 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.199455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" event={"ID":"cc7f29e1-e6e0-45a0-920a-4b18d8204c65","Type":"ContainerStarted","Data":"38827df845490c23083bfe7ad56408d36b7f133ee4205b5d8f2c508acb6f51bb"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.199693 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.202222 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" event={"ID":"4dfb4621-d061-4224-8aee-840726565aa3","Type":"ContainerStarted","Data":"9ec18a9a77ad0fdc4f804273a8abe29c520a02bdab0106a53a2c839719a8c029"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.203797 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" event={"ID":"7ef21481-ade5-436a-ae3a-f284a7e438d3","Type":"ContainerStarted","Data":"07b41414d7e1ab56b15b8ff840c83af0b9ece1889e20e7b35e89d692e025a4f6"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.204779 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" event={"ID":"75e682e9-e5a5-47f1-83cc-c8004ebe224a","Type":"ContainerStarted","Data":"533a43b63baaef4c48b0595f64bf2da5a0cf4bf59f804a0b873b863aa677d7fc"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.241804 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.241983 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.242051 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. 
No retries permitted until 2026-01-28 18:33:30.242031345 +0000 UTC m=+1221.068594166 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.420611 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.448172 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.463531 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.473768 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.501240 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9897766d_6497_4d0e_bd9a_ef8e31a08e24.slice/crio-f22a32963324473c47accb3f5fce6d50e0a1ff7c411a434f2df61c0d594ee00f WatchSource:0}: Error finding container f22a32963324473c47accb3f5fce6d50e0a1ff7c411a434f2df61c0d594ee00f: Status 404 returned error can't find the container with id f22a32963324473c47accb3f5fce6d50e0a1ff7c411a434f2df61c0d594ee00f Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.501831 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc95374e8_7d41_4a49_add9_7f28196d70eb.slice/crio-9d7afa29f693b25687d16ef718f2bf55cc4d3cbeec40ff5fce66297d9571afeb WatchSource:0}: Error finding container 9d7afa29f693b25687d16ef718f2bf55cc4d3cbeec40ff5fce66297d9571afeb: Status 404 returned error can't find the container with id 9d7afa29f693b25687d16ef718f2bf55cc4d3cbeec40ff5fce66297d9571afeb Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.650291 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.650349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.650625 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.650699 4985 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:30.650679752 +0000 UTC m=+1221.477242573 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.650760 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.650789 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:30.650779845 +0000 UTC m=+1221.477342666 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.664906 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.674735 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.685885 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38846228_cec9_4a59_b9bb_c766121dacde.slice/crio-a1fd34a0d34d1ee86d4d600619c1244234a4ca8e4e1e0600d5cbaaefe798df5a WatchSource:0}: Error finding container a1fd34a0d34d1ee86d4d600619c1244234a4ca8e4e1e0600d5cbaaefe798df5a: Status 404 returned error can't find the container with id a1fd34a0d34d1ee86d4d600619c1244234a4ca8e4e1e0600d5cbaaefe798df5a Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.688108 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-xzkhh"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.697944 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.699840 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359fd3be_e8b7_4f51_bb1d_a5d8bdc228c3.slice/crio-6d19c1d188ea416165240754b078175bbfd0bf8f297d5cb78c4a0bc97c7fca7f WatchSource:0}: Error finding container 6d19c1d188ea416165240754b078175bbfd0bf8f297d5cb78c4a0bc97c7fca7f: Status 404 returned error can't find the container with id 6d19c1d188ea416165240754b078175bbfd0bf8f297d5cb78c4a0bc97c7fca7f Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.701552 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7prf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-xwzkh_openstack-operators(1310770f-7cb7-4874-b2a0-4ef733911716): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.703644 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.712225 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.132:5001/openstack-k8s-operators/telemetry-operator:78376376ba0b23dd44ee177d28d423a994de68bb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7w4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-74c974475f-b9j67_openstack-operators(359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.713394 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.719424 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2pcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-xzkhh_openstack-operators(d4d6e990-839d-4186-9382-1a67922556df): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.720633 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.218327 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" event={"ID":"c95374e8-7d41-4a49-add9-7f28196d70eb","Type":"ContainerStarted","Data":"9d7afa29f693b25687d16ef718f2bf55cc4d3cbeec40ff5fce66297d9571afeb"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.220093 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" event={"ID":"d4d6e990-839d-4186-9382-1a67922556df","Type":"ContainerStarted","Data":"8bd4c59f1b88139542870f0eac8ceb9141b65af7edd0cfb46e3ef029d2d339e3"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.222403 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" event={"ID":"38846228-cec9-4a59-b9bb-c766121dacde","Type":"ContainerStarted","Data":"a1fd34a0d34d1ee86d4d600619c1244234a4ca8e4e1e0600d5cbaaefe798df5a"} Jan 28 18:33:29 crc kubenswrapper[4985]: E0128 18:33:29.222696 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.227795 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" event={"ID":"367b6525-0367-437a-9fe3-b2007411f4af","Type":"ContainerStarted","Data":"f56b3d96fddac1a73206f53aabfbfc3690deab36214d567ff1d8b8902021347f"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.239059 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" event={"ID":"654a2c56-81a7-4b32-ad1d-c4d60b054b47","Type":"ContainerStarted","Data":"3226c3e56950335b52dfc2884483dd6cf371022588ebf9eea7dca7bda293b6b5"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.245936 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" event={"ID":"50682373-a3d7-491e-84a0-1d5613ee2e8a","Type":"ContainerStarted","Data":"1885650fb2939d0a3e8b331c3e371a5feffffd540e2271ca517ab31770e313cf"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.248382 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" event={"ID":"9897766d-6497-4d0e-bd9a-ef8e31a08e24","Type":"ContainerStarted","Data":"f22a32963324473c47accb3f5fce6d50e0a1ff7c411a434f2df61c0d594ee00f"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.250103 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" event={"ID":"873dc5cd-5c8e-417e-b99a-a52dfcfd701b","Type":"ContainerStarted","Data":"94846cb5686126f16d6556aa80994aabf816e2a6268715c2a885ea4c0d524965"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.252924 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" event={"ID":"99b88683-3e0a-4afa-91ab-71feac27fba1","Type":"ContainerStarted","Data":"950783a93d654fc9c4324b80ba1ddbb41213709ccc769713e4f93646fd2a9aed"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.262848 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" event={"ID":"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3","Type":"ContainerStarted","Data":"6d19c1d188ea416165240754b078175bbfd0bf8f297d5cb78c4a0bc97c7fca7f"} Jan 28 18:33:29 crc kubenswrapper[4985]: E0128 18:33:29.265948 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/openstack-k8s-operators/telemetry-operator:78376376ba0b23dd44ee177d28d423a994de68bb\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.277354 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" event={"ID":"91971c24-6187-432c-84ba-65dba69b4598","Type":"ContainerStarted","Data":"797597753d738831804c41e63a07a1ab4d238d1592e2cd57bf33e019b0a8261a"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.277403 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" event={"ID":"9c7284ab-b40f-4275-b85e-77aebd660135","Type":"ContainerStarted","Data":"f5bf03c5437bcca2b7d7f1632d6f9b02968d89997187b55ae146d48ba4b887ea"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.293211 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" event={"ID":"1310770f-7cb7-4874-b2a0-4ef733911716","Type":"ContainerStarted","Data":"b1b03445d0106999db73a6aa3bfa5147243f4a023495cb71ae9b47af73b36b54"} Jan 28 18:33:29 crc 
kubenswrapper[4985]: E0128 18:33:29.294628 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.298387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" event={"ID":"b5a0c28d-1434-40f0-8759-d76b65dc2c30","Type":"ContainerStarted","Data":"841b1b41f3d001fa1b16fadde23957fb41377241b955ac2022a56af285c60a7e"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.590507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:29 crc kubenswrapper[4985]: E0128 18:33:29.590716 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:29 crc kubenswrapper[4985]: E0128 18:33:29.590763 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:33.590749291 +0000 UTC m=+1224.417312112 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.308486 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: I0128 18:33:30.308497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.308559 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:34.308539526 +0000 UTC m=+1225.135102347 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.343079 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/openstack-k8s-operators/telemetry-operator:78376376ba0b23dd44ee177d28d423a994de68bb\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.346406 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.351110 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:33:30 crc kubenswrapper[4985]: I0128 18:33:30.718403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:30 crc kubenswrapper[4985]: I0128 18:33:30.718862 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.718796 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.718967 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:34.718951823 +0000 UTC m=+1225.545514644 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.719114 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.719162 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:34.719153049 +0000 UTC m=+1225.545715870 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:33 crc kubenswrapper[4985]: I0128 18:33:33.675233 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:33 crc kubenswrapper[4985]: E0128 18:33:33.675417 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:33 crc kubenswrapper[4985]: E0128 18:33:33.675506 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:41.675482633 +0000 UTC m=+1232.502045474 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: I0128 18:33:34.401984 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.402219 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.402303 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:42.402277262 +0000 UTC m=+1233.228840083 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: I0128 18:33:34.809743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:34 crc kubenswrapper[4985]: I0128 18:33:34.810058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.809916 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.810194 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:42.810175018 +0000 UTC m=+1233.636737839 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.810230 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.810300 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:42.810285051 +0000 UTC m=+1233.636847872 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.573051 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.573847 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2f2vn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-rbn84_openstack-operators(9897766d-6497-4d0e-bd9a-ef8e31a08e24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.575199 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" 
podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" Jan 28 18:33:41 crc kubenswrapper[4985]: I0128 18:33:41.746598 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.746800 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.746855 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:57.746839125 +0000 UTC m=+1248.573401956 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.462767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.476135 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:42 crc kubenswrapper[4985]: E0128 18:33:42.500241 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.635977 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.872346 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.872412 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:42 crc kubenswrapper[4985]: E0128 18:33:42.872784 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:42 crc kubenswrapper[4985]: E0128 18:33:42.872936 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:58.872897527 +0000 UTC m=+1249.699460348 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.885013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:43 crc kubenswrapper[4985]: E0128 18:33:43.912993 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 28 18:33:43 crc kubenswrapper[4985]: E0128 18:33:43.913453 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x57sk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-v5mmf_openstack-operators(50682373-a3d7-491e-84a0-1d5613ee2e8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:43 crc kubenswrapper[4985]: E0128 18:33:43.914588 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" Jan 28 18:33:44 crc kubenswrapper[4985]: E0128 18:33:44.520420 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" Jan 28 18:33:45 crc kubenswrapper[4985]: E0128 18:33:45.031986 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 28 18:33:45 crc kubenswrapper[4985]: E0128 18:33:45.032185 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5d8sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-9kbdr_openstack-operators(c95374e8-7d41-4a49-add9-7f28196d70eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:45 crc kubenswrapper[4985]: E0128 18:33:45.033434 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" Jan 28 18:33:45 crc kubenswrapper[4985]: E0128 18:33:45.526893 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" Jan 28 18:33:46 crc kubenswrapper[4985]: E0128 18:33:46.838615 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 28 18:33:46 crc kubenswrapper[4985]: E0128 18:33:46.838868 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zlwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-4smn2_openstack-operators(367b6525-0367-437a-9fe3-b2007411f4af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:46 crc kubenswrapper[4985]: E0128 18:33:46.840364 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.433190 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.433430 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g84zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-qn5x9_openstack-operators(91971c24-6187-432c-84ba-65dba69b4598): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.434674 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.556563 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.557056 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" 
podUID="91971c24-6187-432c-84ba-65dba69b4598" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.978789 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.978997 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mf2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-9lm5f_openstack-operators(654a2c56-81a7-4b32-ad1d-c4d60b054b47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.980204 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" podUID="654a2c56-81a7-4b32-ad1d-c4d60b054b47" Jan 28 18:33:48 crc kubenswrapper[4985]: E0128 18:33:48.568766 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" podUID="654a2c56-81a7-4b32-ad1d-c4d60b054b47" Jan 28 18:33:49 crc kubenswrapper[4985]: I0128 18:33:49.849955 4985 scope.go:117] "RemoveContainer" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" Jan 28 18:33:51 crc kubenswrapper[4985]: E0128 18:33:51.614707 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 28 18:33:51 crc kubenswrapper[4985]: E0128 18:33:51.615108 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6pmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-dlssr_openstack-operators(873dc5cd-5c8e-417e-b99a-a52dfcfd701b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:51 crc kubenswrapper[4985]: E0128 18:33:51.616579 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.211837 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.212081 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2qth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-fm7nr_openstack-operators(cc7f29e1-e6e0-45a0-920a-4b18d8204c65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.213525 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.597444 4985 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.597887 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" Jan 28 18:33:54 crc kubenswrapper[4985]: E0128 18:33:54.997813 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 28 18:33:54 crc kubenswrapper[4985]: E0128 18:33:54.998041 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tdqdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod horizon-operator-controller-manager-77d5c5b54f-6skp6_openstack-operators(99b88683-3e0a-4afa-91ab-71feac27fba1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:54 crc kubenswrapper[4985]: E0128 18:33:54.999299 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" Jan 28 18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.562215 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 28 18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.562650 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b2z62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-75d84_openstack-operators(4dfb4621-d061-4224-8aee-840726565aa3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 
18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.563818 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" podUID="4dfb4621-d061-4224-8aee-840726565aa3" Jan 28 18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.628457 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" podUID="4dfb4621-d061-4224-8aee-840726565aa3" Jan 28 18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.628709 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.244170 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.244825 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fv6lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-hktv5_openstack-operators(b5a0c28d-1434-40f0-8759-d76b65dc2c30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.246153 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.637283 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.986110 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.986322 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7prf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-xwzkh_openstack-operators(1310770f-7cb7-4874-b2a0-4ef733911716): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.987580 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:33:57 crc kubenswrapper[4985]: E0128 18:33:57.509888 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 28 18:33:57 crc kubenswrapper[4985]: E0128 18:33:57.510443 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2pcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-xzkhh_openstack-operators(d4d6e990-839d-4186-9382-1a67922556df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:57 crc kubenswrapper[4985]: E0128 18:33:57.511624 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:33:57 crc kubenswrapper[4985]: I0128 18:33:57.823595 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:57 crc kubenswrapper[4985]: I0128 18:33:57.832692 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:57 crc kubenswrapper[4985]: I0128 18:33:57.955770 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:58 crc kubenswrapper[4985]: I0128 18:33:58.905294 4985 scope.go:117] "RemoveContainer" containerID="191c84609dfb2c8268e33648b1fa5d4251ffb2f7286e97b627cb86dee2d94615" Jan 28 18:33:58 crc kubenswrapper[4985]: E0128 18:33:58.926039 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 28 18:33:58 crc kubenswrapper[4985]: E0128 18:33:58.926248 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxbxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7s7s2_openstack-operators(38846228-cec9-4a59-b9bb-c766121dacde): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:58 crc kubenswrapper[4985]: E0128 18:33:58.927500 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" podUID="38846228-cec9-4a59-b9bb-c766121dacde" Jan 28 18:33:58 crc kubenswrapper[4985]: I0128 18:33:58.951417 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: 
\"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:58 crc kubenswrapper[4985]: I0128 18:33:58.959473 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.256303 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.443020 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"] Jan 28 18:33:59 crc kubenswrapper[4985]: W0128 18:33:59.444163 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70329607_4bbe_43ad_bb7a_2b62f26af473.slice/crio-3a28077655ae09027099f0e849e32bf28ac0e788d40fd7454cac4924a0de6132 WatchSource:0}: Error finding container 3a28077655ae09027099f0e849e32bf28ac0e788d40fd7454cac4924a0de6132: Status 404 returned error can't find the container with id 3a28077655ae09027099f0e849e32bf28ac0e788d40fd7454cac4924a0de6132 Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.631200 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"] Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.654497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w"] Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.669088 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" event={"ID":"75e682e9-e5a5-47f1-83cc-c8004ebe224a","Type":"ContainerStarted","Data":"596b4dba169c9d1346382306092c265742b4366e6f0e6de87ce3064127855dd0"} Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.669394 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.671871 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" event={"ID":"70329607-4bbe-43ad-bb7a-2b62f26af473","Type":"ContainerStarted","Data":"3a28077655ae09027099f0e849e32bf28ac0e788d40fd7454cac4924a0de6132"} Jan 28 18:33:59 crc kubenswrapper[4985]: W0128 18:33:59.674966 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod697da6ae_2950_468c_82e9_bcb1a1af61e7.slice/crio-2a322700646174507eb4bc8f892ce834e331fdf9781e07517816051ba142d930 WatchSource:0}: Error finding container 2a322700646174507eb4bc8f892ce834e331fdf9781e07517816051ba142d930: Status 404 returned error can't find the container with id 2a322700646174507eb4bc8f892ce834e331fdf9781e07517816051ba142d930 Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.675540 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" event={"ID":"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8","Type":"ContainerStarted","Data":"5365af029ad5ded9a998e8f9e1cd3a0cd10f3a5754f748b72b8396f401214696"} Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.676130 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.678703 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" event={"ID":"9c7284ab-b40f-4275-b85e-77aebd660135","Type":"ContainerStarted","Data":"ac9d4b13d281d4e9fb7fc67135b7b9665a8e3d5bfc5600b7571ded9088424b3d"} Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.678910 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.681374 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" event={"ID":"7ef21481-ade5-436a-ae3a-f284a7e438d3","Type":"ContainerStarted","Data":"b754f63e41c81ccfe7cbc1779be3894eb7b9b60785b05928a0f95f05a01db4aa"} Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.681471 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 18:33:59 crc kubenswrapper[4985]: E0128 18:33:59.682173 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" podUID="38846228-cec9-4a59-b9bb-c766121dacde" Jan 28 18:33:59 crc kubenswrapper[4985]: W0128 18:33:59.687291 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1e8524e_e047_4872_9ee1_ae4e013f8825.slice/crio-9ed0beaeedd642e690ab7450823f7803959f39b403f7d6fdee0e93680f3c49f6 WatchSource:0}: Error finding container 9ed0beaeedd642e690ab7450823f7803959f39b403f7d6fdee0e93680f3c49f6: Status 404 returned error can't find the container with id 9ed0beaeedd642e690ab7450823f7803959f39b403f7d6fdee0e93680f3c49f6 Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.690770 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" podStartSLOduration=4.739668547 podStartE2EDuration="34.690748873s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.149190813 +0000 UTC m=+1218.975753634" lastFinishedPulling="2026-01-28 18:33:58.100271139 +0000 UTC m=+1248.926833960" observedRunningTime="2026-01-28 18:33:59.683660163 +0000 UTC m=+1250.510222984" watchObservedRunningTime="2026-01-28 18:33:59.690748873 +0000 UTC m=+1250.517311694" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.718957 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podStartSLOduration=5.664042703 podStartE2EDuration="34.718938749s" podCreationTimestamp="2026-01-28 
18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:27.157498665 +0000 UTC m=+1217.984061486" lastFinishedPulling="2026-01-28 18:33:56.212394711 +0000 UTC m=+1247.038957532" observedRunningTime="2026-01-28 18:33:59.718471495 +0000 UTC m=+1250.545034316" watchObservedRunningTime="2026-01-28 18:33:59.718938749 +0000 UTC m=+1250.545501570" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.809133 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podStartSLOduration=5.485994197 podStartE2EDuration="34.809111565s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:26.889072167 +0000 UTC m=+1217.715634988" lastFinishedPulling="2026-01-28 18:33:56.212189535 +0000 UTC m=+1247.038752356" observedRunningTime="2026-01-28 18:33:59.779309593 +0000 UTC m=+1250.605872424" watchObservedRunningTime="2026-01-28 18:33:59.809111565 +0000 UTC m=+1250.635674386" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.812912 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podStartSLOduration=4.883381594 podStartE2EDuration="34.812891651s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.17066188 +0000 UTC m=+1218.997224701" lastFinishedPulling="2026-01-28 18:33:58.100171937 +0000 UTC m=+1248.926734758" observedRunningTime="2026-01-28 18:33:59.802712884 +0000 UTC m=+1250.629275705" watchObservedRunningTime="2026-01-28 18:33:59.812891651 +0000 UTC m=+1250.639454472" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.697130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" event={"ID":"99893bb5-33ef-4159-bf8f-1c79a58e74d9","Type":"ContainerStarted","Data":"233a43b6b8981b47ec5714f819a1eee5418974ea1fc4d83d0b402ba20404e013"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.697516 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.702137 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" event={"ID":"50682373-a3d7-491e-84a0-1d5613ee2e8a","Type":"ContainerStarted","Data":"ff10dd6aec762e5c6f8ac00bc0e5212cc4c9ba6fe7bf3a0a1e2f0ca6c68d8b77"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.702493 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.704265 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" event={"ID":"9897766d-6497-4d0e-bd9a-ef8e31a08e24","Type":"ContainerStarted","Data":"244f2175d0f0083282126d17a82f0ff642cfc28ca6ee1538cedf6e4920fb3907"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.704760 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.707520 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" 
event={"ID":"697da6ae-2950-468c-82e9-bcb1a1af61e7","Type":"ContainerStarted","Data":"2a322700646174507eb4bc8f892ce834e331fdf9781e07517816051ba142d930"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.710646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" event={"ID":"c1e8524e-e047-4872-9ee1-ae4e013f8825","Type":"ContainerStarted","Data":"5f53fa7d92091209441e8e64320cea938b2d017d0c909c4229f125c84c482055"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.710785 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" event={"ID":"c1e8524e-e047-4872-9ee1-ae4e013f8825","Type":"ContainerStarted","Data":"9ed0beaeedd642e690ab7450823f7803959f39b403f7d6fdee0e93680f3c49f6"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.711072 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.714379 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" event={"ID":"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3","Type":"ContainerStarted","Data":"33e8754f74c0d539b6d740cc1480faa9b0b2b64b42c058d6a29292cd2a6ebd3c"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.714641 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.716432 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" event={"ID":"c95374e8-7d41-4a49-add9-7f28196d70eb","Type":"ContainerStarted","Data":"1fed3409e13546ceae0b5c7a89f2c6b82737a4ae622cdb4f7150010d61389b1f"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.734767 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" podStartSLOduration=7.083679323 podStartE2EDuration="35.734744228s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:27.561281405 +0000 UTC m=+1218.387844226" lastFinishedPulling="2026-01-28 18:33:56.21234627 +0000 UTC m=+1247.038909131" observedRunningTime="2026-01-28 18:34:00.719927639 +0000 UTC m=+1251.546490470" watchObservedRunningTime="2026-01-28 18:34:00.734744228 +0000 UTC m=+1251.561307049" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.748650 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podStartSLOduration=5.048398433 podStartE2EDuration="35.748605439s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.503919238 +0000 UTC m=+1219.330482059" lastFinishedPulling="2026-01-28 18:33:59.204126244 +0000 UTC m=+1250.030689065" observedRunningTime="2026-01-28 18:34:00.739653956 +0000 UTC m=+1251.566216777" watchObservedRunningTime="2026-01-28 18:34:00.748605439 +0000 UTC m=+1251.575168260" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.794007 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podStartSLOduration=34.79396812 
podStartE2EDuration="34.79396812s" podCreationTimestamp="2026-01-28 18:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:34:00.791668555 +0000 UTC m=+1251.618231376" watchObservedRunningTime="2026-01-28 18:34:00.79396812 +0000 UTC m=+1251.620530941" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.850434 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podStartSLOduration=5.356023908 podStartE2EDuration="35.850407323s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.503314241 +0000 UTC m=+1219.329877062" lastFinishedPulling="2026-01-28 18:33:58.997697646 +0000 UTC m=+1249.824260477" observedRunningTime="2026-01-28 18:34:00.820217131 +0000 UTC m=+1251.646779972" watchObservedRunningTime="2026-01-28 18:34:00.850407323 +0000 UTC m=+1251.676970144" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.862850 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podStartSLOduration=5.144762593 podStartE2EDuration="35.862828184s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.456426467 +0000 UTC m=+1219.282989288" lastFinishedPulling="2026-01-28 18:33:59.174492058 +0000 UTC m=+1250.001054879" observedRunningTime="2026-01-28 18:34:00.851199235 +0000 UTC m=+1251.677762046" watchObservedRunningTime="2026-01-28 18:34:00.862828184 +0000 UTC m=+1251.689391005" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.885809 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podStartSLOduration=5.399967348 podStartE2EDuration="35.885791712s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.711985032 +0000 UTC m=+1219.538547853" lastFinishedPulling="2026-01-28 18:33:59.197809406 +0000 UTC m=+1250.024372217" observedRunningTime="2026-01-28 18:34:00.885577746 +0000 UTC m=+1251.712140567" watchObservedRunningTime="2026-01-28 18:34:00.885791712 +0000 UTC m=+1251.712354533" Jan 28 18:34:01 crc kubenswrapper[4985]: I0128 18:34:01.724764 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" event={"ID":"367b6525-0367-437a-9fe3-b2007411f4af","Type":"ContainerStarted","Data":"62135ee7a2eb606526c37bb8ddcd9bc19db80c6717a626f58c7287903e72ecfa"} Jan 28 18:34:01 crc kubenswrapper[4985]: I0128 18:34:01.727111 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 18:34:01 crc kubenswrapper[4985]: I0128 18:34:01.748646 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podStartSLOduration=4.15645547 podStartE2EDuration="36.748623711s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.173017976 +0000 UTC m=+1218.999580797" lastFinishedPulling="2026-01-28 18:34:00.765186217 +0000 UTC m=+1251.591749038" observedRunningTime="2026-01-28 18:34:01.745435151 +0000 UTC m=+1252.571997982" watchObservedRunningTime="2026-01-28 18:34:01.748623711 +0000 UTC 
m=+1252.575186532" Jan 28 18:34:02 crc kubenswrapper[4985]: I0128 18:34:02.751617 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" event={"ID":"654a2c56-81a7-4b32-ad1d-c4d60b054b47","Type":"ContainerStarted","Data":"1ef3dc985b18a845765f879402221605ba345883a0e78518b5164ff3d2d033a0"} Jan 28 18:34:02 crc kubenswrapper[4985]: I0128 18:34:02.752735 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" Jan 28 18:34:03 crc kubenswrapper[4985]: I0128 18:34:03.287871 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" podStartSLOduration=4.572693103 podStartE2EDuration="38.287850547s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.171193755 +0000 UTC m=+1218.997756576" lastFinishedPulling="2026-01-28 18:34:01.886351199 +0000 UTC m=+1252.712914020" observedRunningTime="2026-01-28 18:34:02.770781959 +0000 UTC m=+1253.597344780" watchObservedRunningTime="2026-01-28 18:34:03.287850547 +0000 UTC m=+1254.114413368" Jan 28 18:34:05 crc kubenswrapper[4985]: I0128 18:34:05.884048 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:34:05 crc kubenswrapper[4985]: I0128 18:34:05.910006 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 18:34:05 crc kubenswrapper[4985]: I0128 18:34:05.953747 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:34:06 crc kubenswrapper[4985]: I0128 18:34:06.349598 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" Jan 28 18:34:06 crc kubenswrapper[4985]: I0128 18:34:06.503063 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 18:34:06 crc kubenswrapper[4985]: I0128 18:34:06.553168 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 18:34:06 crc kubenswrapper[4985]: I0128 18:34:06.732019 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:34:07 crc kubenswrapper[4985]: I0128 18:34:07.051673 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:34:07 crc kubenswrapper[4985]: I0128 18:34:07.165759 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:34:07 crc kubenswrapper[4985]: I0128 18:34:07.171105 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:34:07 crc kubenswrapper[4985]: I0128 18:34:07.294562 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:34:09 crc kubenswrapper[4985]: I0128 18:34:09.295015 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:34:10 crc kubenswrapper[4985]: E0128 18:34:10.266529 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.856873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" event={"ID":"697da6ae-2950-468c-82e9-bcb1a1af61e7","Type":"ContainerStarted","Data":"bff91fc4047ca8cb0c7f5c491bb739bdfbe2ef37ed14ecab78cbc847a02193b4"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.858032 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.859781 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" event={"ID":"873dc5cd-5c8e-417e-b99a-a52dfcfd701b","Type":"ContainerStarted","Data":"6be03048a45e76fc38842b0f2aa2d2749422dcbc025d44c650518ad71eb52fc8"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.860275 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.862602 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" event={"ID":"99b88683-3e0a-4afa-91ab-71feac27fba1","Type":"ContainerStarted","Data":"1929e793821573d3c1a565d61317bcfad5538b41e79ae8732d91df7c5e2173b2"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.863215 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.865465 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" event={"ID":"4dfb4621-d061-4224-8aee-840726565aa3","Type":"ContainerStarted","Data":"80ea51f2e278a8d38f5c2b991ca9c8c9e8dd7d3746d654e3e185b9388c1c038a"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.866038 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.868665 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" event={"ID":"70329607-4bbe-43ad-bb7a-2b62f26af473","Type":"ContainerStarted","Data":"b40c5de86bd5ee489a9235ce7345e2de0ac05a1a4eb0def7135cf083a63627f0"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.869446 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.871208 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" event={"ID":"cc7f29e1-e6e0-45a0-920a-4b18d8204c65","Type":"ContainerStarted","Data":"b4af6b1594b7467f446e940a66763ef0f6b702bf026796c5550c43aad291ee7c"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.871713 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.873152 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" event={"ID":"91971c24-6187-432c-84ba-65dba69b4598","Type":"ContainerStarted","Data":"9d2c97996374895a55b806ee971623886630ad28da6fcc1d054133f6f6157280"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.873353 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.887024 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podStartSLOduration=35.696836597 podStartE2EDuration="45.887004779s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:59.677338914 +0000 UTC m=+1250.503901735" lastFinishedPulling="2026-01-28 18:34:09.867507096 +0000 UTC m=+1260.694069917" observedRunningTime="2026-01-28 18:34:10.876022619 +0000 UTC m=+1261.702585460" watchObservedRunningTime="2026-01-28 18:34:10.887004779 +0000 UTC m=+1261.713567600" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.895408 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" podStartSLOduration=3.5892935489999998 podStartE2EDuration="45.895388866s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:27.569574639 +0000 UTC m=+1218.396137460" lastFinishedPulling="2026-01-28 18:34:09.875669956 +0000 UTC m=+1260.702232777" observedRunningTime="2026-01-28 18:34:10.893741369 +0000 UTC m=+1261.720304190" watchObservedRunningTime="2026-01-28 18:34:10.895388866 +0000 UTC m=+1261.721951687" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.910539 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podStartSLOduration=4.208314645 podStartE2EDuration="45.910510743s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.171965886 +0000 UTC m=+1218.998528707" lastFinishedPulling="2026-01-28 18:34:09.874161984 +0000 UTC m=+1260.700724805" observedRunningTime="2026-01-28 18:34:10.908537557 +0000 UTC m=+1261.735100388" watchObservedRunningTime="2026-01-28 18:34:10.910510743 +0000 UTC m=+1261.737073564" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.934082 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podStartSLOduration=35.513412719 podStartE2EDuration="45.934060727s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" 
firstStartedPulling="2026-01-28 18:33:59.446204809 +0000 UTC m=+1250.272767630" lastFinishedPulling="2026-01-28 18:34:09.866852807 +0000 UTC m=+1260.693415638" observedRunningTime="2026-01-28 18:34:10.933384898 +0000 UTC m=+1261.759947719" watchObservedRunningTime="2026-01-28 18:34:10.934060727 +0000 UTC m=+1261.760623548" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.952033 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podStartSLOduration=4.252199074 podStartE2EDuration="45.952016374s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.172648126 +0000 UTC m=+1218.999210947" lastFinishedPulling="2026-01-28 18:34:09.872465426 +0000 UTC m=+1260.699028247" observedRunningTime="2026-01-28 18:34:10.950393869 +0000 UTC m=+1261.776956700" watchObservedRunningTime="2026-01-28 18:34:10.952016374 +0000 UTC m=+1261.778579195" Jan 28 18:34:11 crc kubenswrapper[4985]: I0128 18:34:11.005827 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podStartSLOduration=3.69946772 podStartE2EDuration="46.005803653s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:27.569440526 +0000 UTC m=+1218.396003347" lastFinishedPulling="2026-01-28 18:34:09.875776469 +0000 UTC m=+1260.702339280" observedRunningTime="2026-01-28 18:34:10.984144411 +0000 UTC m=+1261.810707232" watchObservedRunningTime="2026-01-28 18:34:11.005803653 +0000 UTC m=+1261.832366474" Jan 28 18:34:11 crc kubenswrapper[4985]: I0128 18:34:11.006991 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podStartSLOduration=4.590297669 podStartE2EDuration="46.006983396s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.456416307 +0000 UTC m=+1219.282979128" lastFinishedPulling="2026-01-28 18:34:09.873102044 +0000 UTC m=+1260.699664855" observedRunningTime="2026-01-28 18:34:10.999738272 +0000 UTC m=+1261.826301093" watchObservedRunningTime="2026-01-28 18:34:11.006983396 +0000 UTC m=+1261.833546217" Jan 28 18:34:12 crc kubenswrapper[4985]: E0128 18:34:12.266837 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:34:12 crc kubenswrapper[4985]: I0128 18:34:12.889767 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" event={"ID":"b5a0c28d-1434-40f0-8759-d76b65dc2c30","Type":"ContainerStarted","Data":"11f64e6924e35c8dac9934d956caaaa9c36e16ee58665f9b1149145a0715d500"} Jan 28 18:34:12 crc kubenswrapper[4985]: I0128 18:34:12.906053 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podStartSLOduration=4.177413273 podStartE2EDuration="47.90602914s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.179916111 +0000 UTC m=+1219.006478942" 
lastFinishedPulling="2026-01-28 18:34:11.908531998 +0000 UTC m=+1262.735094809" observedRunningTime="2026-01-28 18:34:12.905577927 +0000 UTC m=+1263.732140758" watchObservedRunningTime="2026-01-28 18:34:12.90602914 +0000 UTC m=+1263.732591971" Jan 28 18:34:15 crc kubenswrapper[4985]: I0128 18:34:15.914959 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" event={"ID":"38846228-cec9-4a59-b9bb-c766121dacde","Type":"ContainerStarted","Data":"e3fa9329be40e8e7c004d6aea5bd6091de66c9c6bb481177d817723d553d5c05"} Jan 28 18:34:15 crc kubenswrapper[4985]: I0128 18:34:15.934038 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:34:15 crc kubenswrapper[4985]: I0128 18:34:15.935782 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" podStartSLOduration=3.659507582 podStartE2EDuration="49.935761366s" podCreationTimestamp="2026-01-28 18:33:26 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.689270521 +0000 UTC m=+1219.515833342" lastFinishedPulling="2026-01-28 18:34:14.965524305 +0000 UTC m=+1265.792087126" observedRunningTime="2026-01-28 18:34:15.926913636 +0000 UTC m=+1266.753476467" watchObservedRunningTime="2026-01-28 18:34:15.935761366 +0000 UTC m=+1266.762324207" Jan 28 18:34:15 crc kubenswrapper[4985]: I0128 18:34:15.969185 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:34:16 crc kubenswrapper[4985]: I0128 18:34:16.138057 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:34:16 crc kubenswrapper[4985]: I0128 18:34:16.374279 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" Jan 28 18:34:16 crc kubenswrapper[4985]: I0128 18:34:16.382612 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 18:34:16 crc kubenswrapper[4985]: I0128 18:34:16.513775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 18:34:17 crc kubenswrapper[4985]: I0128 18:34:17.147626 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:34:17 crc kubenswrapper[4985]: I0128 18:34:17.965279 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:34:21 crc kubenswrapper[4985]: I0128 18:34:21.969760 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" event={"ID":"1310770f-7cb7-4874-b2a0-4ef733911716","Type":"ContainerStarted","Data":"6e92c8c3af43ff2712b0f8ed60df9fc8862bc534e5395b1207bb47f744084f5b"} Jan 28 18:34:21 crc kubenswrapper[4985]: I0128 18:34:21.970509 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:34:21 crc 
kubenswrapper[4985]: I0128 18:34:21.987768 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podStartSLOduration=2.9858238139999997 podStartE2EDuration="55.987750517s" podCreationTimestamp="2026-01-28 18:33:26 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.701374363 +0000 UTC m=+1219.527937184" lastFinishedPulling="2026-01-28 18:34:21.703301066 +0000 UTC m=+1272.529863887" observedRunningTime="2026-01-28 18:34:21.983180688 +0000 UTC m=+1272.809743519" watchObservedRunningTime="2026-01-28 18:34:21.987750517 +0000 UTC m=+1272.814313348" Jan 28 18:34:22 crc kubenswrapper[4985]: I0128 18:34:22.643418 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:34:23 crc kubenswrapper[4985]: I0128 18:34:23.986434 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" event={"ID":"d4d6e990-839d-4186-9382-1a67922556df","Type":"ContainerStarted","Data":"63ac9ba384926938b30ecfda1c6080eb12ddc04d1c11ca3a283a65a2c51b023d"} Jan 28 18:34:23 crc kubenswrapper[4985]: I0128 18:34:23.987021 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:34:24 crc kubenswrapper[4985]: I0128 18:34:24.013803 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podStartSLOduration=2.983333863 podStartE2EDuration="58.013777886s" podCreationTimestamp="2026-01-28 18:33:26 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.719101703 +0000 UTC m=+1219.545664534" lastFinishedPulling="2026-01-28 18:34:23.749545726 +0000 UTC m=+1274.576108557" observedRunningTime="2026-01-28 18:34:24.008094346 +0000 UTC m=+1274.834657167" watchObservedRunningTime="2026-01-28 18:34:24.013777886 +0000 UTC m=+1274.840340717" Jan 28 18:34:26 crc kubenswrapper[4985]: I0128 18:34:26.384827 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 18:34:27 crc kubenswrapper[4985]: I0128 18:34:27.311090 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:34:37 crc kubenswrapper[4985]: I0128 18:34:37.373685 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.944684 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"] Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.946636 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.946636 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg"
Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.952804 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.953076 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.953423 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-sgpwf"
Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.953603 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.958506 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"]
Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.015153 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"]
Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.018410 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6"
Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.022438 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.029472 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"]
Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.092542 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg"
Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.092759 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg"
Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.194823 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg"
Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.194968 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6"
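Each volume in these records walks the same three-step ladder: operationExecutor.VerifyControllerAttachedVolume (reconciler_common.go:245), then operationExecutor.MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637). A toy model of that ordering; it illustrates the sequence visible in the log, not the kubelet's actual reconciler types:

```go
package main

import "fmt"

// Phases in the order they appear in the log for each volume.
type phase int

const (
	verifyAttached phase = iota // reconciler_common.go:245
	mountStarted                // reconciler_common.go:218
	setUpSucceeded              // operation_generator.go:637
)

type tracker map[string]phase // volume UniqueName -> last phase seen

// advance enforces that a volume only moves forward one step at a time.
func (t tracker) advance(vol string, p phase) error {
	last, seen := t[vol]
	if (!seen && p != verifyAttached) || (seen && p != last+1) {
		return fmt.Errorf("volume %s: unexpected phase %d after %d", vol, p, last)
	}
	t[vol] = p
	return nil
}

func main() {
	t := tracker{}
	// Mirrors the config volume of dnsmasq-dns-675f4bcbfc-z95qg above.
	vol := "kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config"
	for _, p := range []phase{verifyAttached, mountStarted, setUpSucceeded} {
		if err := t.advance(vol, p); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("config volume reached SetUp, as in the log")
}
```

That config volume, for instance, hits all three steps between 18:34:55.092759 and 18:34:55.196018 in the records around this point.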
pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.195116 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.195165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.196018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.225909 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.278940 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.297493 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.297605 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwbpd\" (UniqueName: \"kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.297639 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.298898 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.299778 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: 
\"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.321457 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwbpd\" (UniqueName: \"kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.337965 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.823662 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"] Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.839714 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.939421 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"] Jan 28 18:34:55 crc kubenswrapper[4985]: W0128 18:34:55.940405 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd902791c_2d1f_4c1d_9351_6ef3788b3b77.slice/crio-726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5 WatchSource:0}: Error finding container 726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5: Status 404 returned error can't find the container with id 726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5 Jan 28 18:34:56 crc kubenswrapper[4985]: I0128 18:34:56.328853 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" event={"ID":"d902791c-2d1f-4c1d-9351-6ef3788b3b77","Type":"ContainerStarted","Data":"726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5"} Jan 28 18:34:56 crc kubenswrapper[4985]: I0128 18:34:56.331408 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" event={"ID":"d572008e-db0e-44d1-af83-a8c9a7f2cf48","Type":"ContainerStarted","Data":"63e8d84c0aba56aa3512a4ac1c8f628871da4e22c66d7cefbfe1bef6df1c6884"} Jan 28 18:34:57 crc kubenswrapper[4985]: I0128 18:34:57.984743 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.027360 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.028992 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.040753 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.180725 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cthrq\" (UniqueName: \"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.180880 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.181123 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.282356 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.282707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.282849 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cthrq\" (UniqueName: \"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.284082 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.284085 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.319394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cthrq\" (UniqueName: 
\"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.374169 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.451170 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.469004 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.473998 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.512782 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.598055 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.598108 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.598131 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.705854 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.706456 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.706496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.707355 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.707844 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.727839 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.890101 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:59 crc kubenswrapper[4985]: W0128 18:34:59.078922 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bd09ad3_e6d8_4ee9_b465_139f6de0ae5c.slice/crio-3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b WatchSource:0}: Error finding container 3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b: Status 404 returned error can't find the container with id 3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.087077 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.183866 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.186152 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191184 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191481 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191201 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191549 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8vf7j" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191862 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.192077 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.192212 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.199166 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.202291 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.211615 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.222699 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.225187 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.233769 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.248497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368215 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368274 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368363 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368386 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368402 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368416 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368433 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368450 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368543 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368588 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368604 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368705 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368734 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.369024 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.369243 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370028 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370072 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370098 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370177 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370222 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370237 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370288 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370307 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370322 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370390 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370406 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370450 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370471 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.372312 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.372394 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.426691 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" event={"ID":"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c","Type":"ContainerStarted","Data":"3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b"} Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.472583 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473782 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc 
kubenswrapper[4985]: I0128 18:34:59.473851 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473872 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473888 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473921 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473988 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474055 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474089 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474124 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474147 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474162 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474225 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474243 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474289 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474310 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474326 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474361 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474379 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " 
pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474393 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474453 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474475 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474479 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474509 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474607 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474643 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474774 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474803 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474828 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474878 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.476400 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.476791 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.476857 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.478625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.479230 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.479534 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480071 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480110 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480107 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce250563889cf210f76b1961aa7444b8cbe0d3f306db896236b924f9bdc2ed03/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480107 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480613 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.481613 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.482271 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.483993 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.484775 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " 
pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.485124 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.485309 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.485506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.485793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.488728 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.490629 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.491746 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.492509 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.491168 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.497015 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") pod \"rabbitmq-server-1\" (UID: 
\"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.504981 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.505031 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c775c7dad0eb68939020e6ac39de7a8b8505e50517c4739aca512474a1c5503/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.505113 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.505205 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18da3f6437b5d54d0b067e2370e468c4fc3f3bb8be36828902e2b198f7e21ef1/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.512234 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.512451 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.512961 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.512972 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: W0128 18:34:59.521087 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee74e7b2_a80e_4390_afec_a13db1b25da2.slice/crio-31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376 WatchSource:0}: Error finding container 31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376: Status 404 returned error can't find the 
container with id 31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376 Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.522959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.527461 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.577115 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.590430 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.612485 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.613164 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.619916 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.621731 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.624829 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.624976 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.625112 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-zs2dp" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.626334 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.627538 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.627770 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.635521 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684614 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684662 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684690 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684714 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684732 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684748 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684972 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.685031 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.685079 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.685131 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.685167 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789509 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789539 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789566 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789638 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789665 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789714 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789752 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789945 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.790424 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.790945 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.796550 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.796598 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac8bde78162f1032f95f647174ef8183aa4e0f86240347c6b6b8d4a86e7076a1/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.798128 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.798402 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.800540 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.800928 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.801096 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.812606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.828979 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.832673 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.845947 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.859279 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.873834 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.976815 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.396945 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:35:00 crc kubenswrapper[4985]: W0128 18:35:00.398217 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a4c48be_3f2f_4c2d_a0ba_2084caf7c541.slice/crio-210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23 WatchSource:0}: Error finding container 210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23: Status 404 returned error can't find the container with id 210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23 Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.440188 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" event={"ID":"ee74e7b2-a80e-4390-afec-a13db1b25da2","Type":"ContainerStarted","Data":"31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376"} Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.442911 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerStarted","Data":"210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23"} Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.599910 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.599964 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.722100 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.726179 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.733910 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.745846 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2mt89" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.745851 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.746057 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.750389 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.753436 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.814036 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.823621 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kolla-config\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.823933 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-default\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824065 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824106 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-08864e67-424b-4807-88e5-3a7a74922802\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08864e67-424b-4807-88e5-3a7a74922802\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824126 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824269 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzqz\" (UniqueName: \"kubernetes.io/projected/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kube-api-access-9jzqz\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926219 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-08864e67-424b-4807-88e5-3a7a74922802\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08864e67-424b-4807-88e5-3a7a74922802\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926333 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926368 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926391 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jzqz\" (UniqueName: \"kubernetes.io/projected/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kube-api-access-9jzqz\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926408 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kolla-config\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926447 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-default\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 
18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926768 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.928514 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kolla-config\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.929788 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.931520 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-default\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.931805 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.937759 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.940809 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
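The csi_attacher.go:380 entries above explain why "MountVolume.MountDevice succeeded" is reported immediately for these PVCs: CSI mounting is two-phase (NodeStageVolume stages the device once per node at the globalmount path, then NodePublishVolume exposes it to each pod), and the kubelet only issues the staging call when the driver advertises the STAGE_UNSTAGE_VOLUME node capability. kubevirt.io.hostpath-provisioner does not advertise it, so the attacher logs the skip and records MountDevice as an instant success. A minimal Go sketch of that decision follows; the types and function here are illustrative stand-ins, not the real kubelet internals:

    package main

    import "fmt"

    // nodeCapabilities stands in for the result of the driver's
    // NodeGetCapabilities RPC (hypothetical representation).
    type nodeCapabilities map[string]bool

    // mountDevice mirrors the branch logged by csi_attacher.go:380 above:
    // without the STAGE_UNSTAGE_VOLUME capability there is no
    // NodeStageVolume call, and the operation succeeds immediately.
    func mountDevice(driver string, caps nodeCapabilities, globalMountPath string) error {
    	if !caps["STAGE_UNSTAGE_VOLUME"] {
    		fmt.Printf("%s: STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...\n", driver)
    		return nil
    	}
    	// A staging-capable driver would receive NodeStageVolume here and
    	// mount the device at globalMountPath before any per-pod SetUp.
    	fmt.Printf("%s: staging volume at %s\n", driver, globalMountPath)
    	return nil
    }

    func main() {
    	// hostpath volumes need no node-level staging, matching the log above
    	caps := nodeCapabilities{"STAGE_UNSTAGE_VOLUME": false}
    	_ = mountDevice("kubevirt.io.hostpath-provisioner", caps,
    		"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/.../globalmount")
    }

The subsequent "MountVolume.SetUp succeeded" line for the same PVC is the per-pod NodePublishVolume step, which is where the hostpath volume actually becomes available to the container.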
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.940855 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-08864e67-424b-4807-88e5-3a7a74922802\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08864e67-424b-4807-88e5-3a7a74922802\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/241736b2c687c815404498b1a703eac59b60363755cc372daf663a1193acdcd8/globalmount\"" pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.950788 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jzqz\" (UniqueName: \"kubernetes.io/projected/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kube-api-access-9jzqz\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.964309 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.991243 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-08864e67-424b-4807-88e5-3a7a74922802\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08864e67-424b-4807-88e5-3a7a74922802\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0" Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.076483 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.463988 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerStarted","Data":"f0ff3c53025b9ae422df2e7cccc0ec25b7dd495fd74546696ee043e91187bb41"} Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.469920 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerStarted","Data":"17211bf5e9b8b8c383ea958cf8ed251d1d40c28a9c6c3e8e814a8d59072a3363"} Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.475623 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerStarted","Data":"3743df7761e9f95626d5189d3a604fc7ae4f9d57706f392ce36c256fb508d124"} Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.947879 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.951904 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.955651 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.957561 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-z2wcg" Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.957600 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.959222 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.962984 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057368 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkr7\" (UniqueName: \"kubernetes.io/projected/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kube-api-access-5lkr7\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057522 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057575 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057603 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057632 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-24245c8a-20b3-4600-8192-4628487d4a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24245c8a-20b3-4600-8192-4628487d4a9e\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057678 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057756 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.163994 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164103 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164158 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164197 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-24245c8a-20b3-4600-8192-4628487d4a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24245c8a-20b3-4600-8192-4628487d4a9e\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164316 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164396 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164434 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164532 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5lkr7\" (UniqueName: \"kubernetes.io/projected/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kube-api-access-5lkr7\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.165797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.165907 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.166098 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.168766 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.172904 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.172964 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-24245c8a-20b3-4600-8192-4628487d4a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24245c8a-20b3-4600-8192-4628487d4a9e\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7a286e86a0ff5e9358de4d53c455c6c79dae9dce989e12f65d2f3cc31213a936/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.189413 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.192354 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.212978 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lkr7\" (UniqueName: \"kubernetes.io/projected/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kube-api-access-5lkr7\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.233697 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.235049 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.244123 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.247778 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.247801 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.248019 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5tbcp" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkfwp\" (UniqueName: \"kubernetes.io/projected/88fe31db-8414-43ac-b547-fa0278d9508f-kube-api-access-wkfwp\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267493 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-config-data\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267528 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267619 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-kolla-config\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.321815 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-24245c8a-20b3-4600-8192-4628487d4a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24245c8a-20b3-4600-8192-4628487d4a9e\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370545 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-config-data\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370636 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370699 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370756 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-kolla-config\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370846 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkfwp\" (UniqueName: \"kubernetes.io/projected/88fe31db-8414-43ac-b547-fa0278d9508f-kube-api-access-wkfwp\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.371429 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-config-data\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.372017 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-kolla-config\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.380067 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.381890 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.398376 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkfwp\" (UniqueName: \"kubernetes.io/projected/88fe31db-8414-43ac-b547-fa0278d9508f-kube-api-access-wkfwp\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.582683 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.639374 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 18:35:03 crc kubenswrapper[4985]: I0128 18:35:03.999374 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.086411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: W0128 18:35:04.327861 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8253e52_6b52_45a9_b5d6_680d3dfbebe7.slice/crio-fed6a9175dbcd89ccf358589cad8420ffa9ad9b8667625a1ebb22a73b6a06466 WatchSource:0}: Error finding container fed6a9175dbcd89ccf358589cad8420ffa9ad9b8667625a1ebb22a73b6a06466: Status 404 returned error can't find the container with id fed6a9175dbcd89ccf358589cad8420ffa9ad9b8667625a1ebb22a73b6a06466 Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.329181 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.520695 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerStarted","Data":"b64358d999fa9ab8443bf574a2dc6823b1bf3a2469dbeb9c4025c7e9703bfeed"} Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.522883 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"88fe31db-8414-43ac-b547-fa0278d9508f","Type":"ContainerStarted","Data":"9edcc6df9d4b2dc184587b9332b5a60759478281c8d2ebea39c78338aaa4ce36"} Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.524127 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerStarted","Data":"fed6a9175dbcd89ccf358589cad8420ffa9ad9b8667625a1ebb22a73b6a06466"} Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.765177 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.766425 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.781006 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.781349 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h7kgr" Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.824232 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") pod \"kube-state-metrics-0\" (UID: \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\") " pod="openstack/kube-state-metrics-0" Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.925947 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") pod \"kube-state-metrics-0\" (UID: \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\") " pod="openstack/kube-state-metrics-0" Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.963465 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") pod \"kube-state-metrics-0\" (UID: \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\") " pod="openstack/kube-state-metrics-0" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.126376 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.611957 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn"] Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.615622 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.622641 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-7x8tl" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.623560 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.631371 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn"] Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.670727 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmrr\" (UniqueName: \"kubernetes.io/projected/c9b84394-02f1-4bde-befe-a2a649925c93-kube-api-access-lcmrr\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.670831 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.760024 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.779566 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcmrr\" (UniqueName: \"kubernetes.io/projected/c9b84394-02f1-4bde-befe-a2a649925c93-kube-api-access-lcmrr\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.779722 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: E0128 18:35:05.779862 4985 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 28 18:35:05 crc kubenswrapper[4985]: E0128 18:35:05.779913 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert podName:c9b84394-02f1-4bde-befe-a2a649925c93 nodeName:}" failed. No retries permitted until 2026-01-28 18:35:06.27989722 +0000 UTC m=+1317.106460041 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert") pod "observability-ui-dashboards-66cbf594b5-5w5dn" (UID: "c9b84394-02f1-4bde-befe-a2a649925c93") : secret "observability-ui-dashboards" not found Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.803474 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcmrr\" (UniqueName: \"kubernetes.io/projected/c9b84394-02f1-4bde-befe-a2a649925c93-kube-api-access-lcmrr\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.990726 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74779d9b4-2xxwx"] Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.000386 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.030479 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74779d9b4-2xxwx"] Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086570 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4gpl\" (UniqueName: \"kubernetes.io/projected/6b348b0a-4b9a-4216-adbf-02bcefe1f011-kube-api-access-t4gpl\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086644 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-service-ca\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086705 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086731 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-oauth-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086784 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-oauth-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086844 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-trusted-ca-bundle\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.188888 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-oauth-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-trusted-ca-bundle\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189059 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189080 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gpl\" (UniqueName: \"kubernetes.io/projected/6b348b0a-4b9a-4216-adbf-02bcefe1f011-kube-api-access-t4gpl\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189104 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-service-ca\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189153 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-oauth-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.191373 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-service-ca\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.191828 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-oauth-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.191853 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-trusted-ca-bundle\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.193814 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.196338 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.206651 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gpl\" (UniqueName: \"kubernetes.io/projected/6b348b0a-4b9a-4216-adbf-02bcefe1f011-kube-api-access-t4gpl\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.215473 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-oauth-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.290704 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.294229 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.325513 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.538501 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.568785 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4b8dd73-ff4d-44d3-b30f-a994e993392d","Type":"ContainerStarted","Data":"ec024b4a882b8b962648e5e1cddea01209414bd2598d2c9c73886bd738d4ea3d"} Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.961636 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.967967 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.971923 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-wj229" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.972178 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.972589 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.972188 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.973365 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.973886 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.973949 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.988692 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.012786 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115273 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115691 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115745 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115815 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115886 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115920 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115950 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218525 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218640 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218670 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218741 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.219207 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.219237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.220134 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.220340 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.220390 4985 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.220609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.222829 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.222905 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.223188 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.224532 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.224569 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/48fd35393a2bd67e182a1b8f0b6bc712b43ce2f1ef21a21dd138faec48abf12b/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.224623 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.226403 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.226876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.237209 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.239216 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.276931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.319172 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.840823 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9r84t"]
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.842769 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.845978 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.846297 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.846687 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6gpkf"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.853341 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-f287q"]
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.855991 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.872066 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9r84t"]
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.893631 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f287q"]
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940189 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw4sw\" (UniqueName: \"kubernetes.io/projected/2d1c1ab5-7e43-47cd-8218-3d945574a79c-kube-api-access-tw4sw\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940288 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940502 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-log\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940698 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d1c1ab5-7e43-47cd-8218-3d945574a79c-scripts\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94j6\" (UniqueName: \"kubernetes.io/projected/2c181f14-26b7-49f4-9ae0-869d9b291938-kube-api-access-m94j6\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940883 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-lib\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940985 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-run\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941063 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-etc-ovs\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941204 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c181f14-26b7-49f4-9ae0-869d9b291938-scripts\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941269 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-ovn-controller-tls-certs\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941334 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-log-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941356 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-combined-ca-bundle\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043717 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw4sw\" (UniqueName: \"kubernetes.io/projected/2d1c1ab5-7e43-47cd-8218-3d945574a79c-kube-api-access-tw4sw\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043815 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043873 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043914 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-log\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043951 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d1c1ab5-7e43-47cd-8218-3d945574a79c-scripts\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043972 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m94j6\" (UniqueName: \"kubernetes.io/projected/2c181f14-26b7-49f4-9ae0-869d9b291938-kube-api-access-m94j6\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043993 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-lib\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-run\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044046 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-etc-ovs\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044093 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c181f14-26b7-49f4-9ae0-869d9b291938-scripts\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044110 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-ovn-controller-tls-certs\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044127 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-log-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044143 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-combined-ca-bundle\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044808 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-lib\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044999 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-log\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045078 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045142 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-log-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045349 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-run\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045366 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045433 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-etc-ovs\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.047596 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d1c1ab5-7e43-47cd-8218-3d945574a79c-scripts\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.054502 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c181f14-26b7-49f4-9ae0-869d9b291938-scripts\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.063001 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-ovn-controller-tls-certs\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.065552 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-combined-ca-bundle\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.066001 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw4sw\" (UniqueName: \"kubernetes.io/projected/2d1c1ab5-7e43-47cd-8218-3d945574a79c-kube-api-access-tw4sw\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.066919 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m94j6\" (UniqueName: \"kubernetes.io/projected/2c181f14-26b7-49f4-9ae0-869d9b291938-kube-api-access-m94j6\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.176229 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9r84t"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.195181 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.725681 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.728271 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731437 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731506 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731453 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731712 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731814 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zsvtp"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.751739 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865403 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865488 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865536 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865670 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865789 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865887 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865970 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvt6c\" (UniqueName: \"kubernetes.io/projected/6e1c7625-25e1-442f-9f71-5d2a9323306c-kube-api-access-jvt6c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.866001 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.968423 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972096 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972160 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvt6c\" (UniqueName: \"kubernetes.io/projected/6e1c7625-25e1-442f-9f71-5d2a9323306c-kube-api-access-jvt6c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972189 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972474 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972541 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972632 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.974406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.974974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.974993 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.975743 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.982776 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.983004 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.991591 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvt6c\" (UniqueName: \"kubernetes.io/projected/6e1c7625-25e1-442f-9f71-5d2a9323306c-kube-api-access-jvt6c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.992975 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.993043 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1577e25a4d037b9f1fe65c5cf6da4068d3343b1c98128ca48e5b0ea8ceecf297/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:09 crc kubenswrapper[4985]: I0128 18:35:09.041785 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:09 crc kubenswrapper[4985]: I0128 18:35:09.066453 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.185936 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.186546 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.480868 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.483387 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.486838 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.486953 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.487203 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nvkdc"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.487487 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.493292 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630323 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630377 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630402 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630533 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630597 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630732 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-config\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630760 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drgds\" (UniqueName: \"kubernetes.io/projected/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-kube-api-access-drgds\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630787 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732163 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732225 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732370 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-config\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732388 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drgds\" (UniqueName: \"kubernetes.io/projected/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-kube-api-access-drgds\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732409 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732437 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732486 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.733672 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.734160 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.734867 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-config\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.739230 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.739587 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.739720 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.739762 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/54c46588aa336c2bb13d151debfea516f5088415e77b1327372dc864ad111bd2/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.740225 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.751805 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drgds\" (UniqueName: \"kubernetes.io/projected/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-kube-api-access-drgds\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.770937 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.818784 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 28 18:35:19 crc kubenswrapper[4985]: I0128 18:35:19.762995 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74779d9b4-2xxwx"]
Jan 28 18:35:22 crc kubenswrapper[4985]: E0128 18:35:22.666720 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 28 18:35:22 crc kubenswrapper[4985]: E0128 18:35:22.667227 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zspj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-z95qg_openstack(d572008e-db0e-44d1-af83-a8c9a7f2cf48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 18:35:22 crc kubenswrapper[4985]: E0128 18:35:22.668935 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" podUID="d572008e-db0e-44d1-af83-a8c9a7f2cf48"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.449541 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.450119 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwbpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-x78r6_openstack(d902791c-2d1f-4c1d-9351-6ef3788b3b77): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.451405 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" podUID="d902791c-2d1f-4c1d-9351-6ef3788b3b77"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.491583 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.491749 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cthrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-ndmmr_openstack(1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.493151 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" podUID="1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.515220 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.515431 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwhbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-2ltmw_openstack(ee74e7b2-a80e-4390-afec-a13db1b25da2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.516623 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" podUID="ee74e7b2-a80e-4390-afec-a13db1b25da2"
Jan 28 18:35:23 crc kubenswrapper[4985]: W0128 18:35:23.561300 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b348b0a_4b9a_4216_adbf_02bcefe1f011.slice/crio-866f63d57e390eecef2b103a7c3da56e9b87c70bdffada6f5f86f4e18918897d WatchSource:0}: Error finding container 866f63d57e390eecef2b103a7c3da56e9b87c70bdffada6f5f86f4e18918897d: Status 404 returned error can't find the container with id 866f63d57e390eecef2b103a7c3da56e9b87c70bdffada6f5f86f4e18918897d
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.703541 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg"
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.729376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" event={"ID":"d572008e-db0e-44d1-af83-a8c9a7f2cf48","Type":"ContainerDied","Data":"63e8d84c0aba56aa3512a4ac1c8f628871da4e22c66d7cefbfe1bef6df1c6884"}
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.729529 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg"
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.734819 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74779d9b4-2xxwx" event={"ID":"6b348b0a-4b9a-4216-adbf-02bcefe1f011","Type":"ContainerStarted","Data":"866f63d57e390eecef2b103a7c3da56e9b87c70bdffada6f5f86f4e18918897d"}
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.736445 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" podUID="ee74e7b2-a80e-4390-afec-a13db1b25da2"
Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.736828 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" podUID="1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c"
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.787607 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") pod \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") "
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.787788 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") pod \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") "
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.791412 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config" (OuterVolumeSpecName: "config") pod "d572008e-db0e-44d1-af83-a8c9a7f2cf48" (UID: "d572008e-db0e-44d1-af83-a8c9a7f2cf48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.801581 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj" (OuterVolumeSpecName: "kube-api-access-7zspj") pod "d572008e-db0e-44d1-af83-a8c9a7f2cf48" (UID: "d572008e-db0e-44d1-af83-a8c9a7f2cf48"). InnerVolumeSpecName "kube-api-access-7zspj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.890380 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.890411 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.100865 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"]
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.106204 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"]
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.290714 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.307498 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn"]
Jan 28 18:35:24 crc kubenswrapper[4985]: W0128 18:35:24.343067 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9b84394_02f1_4bde_befe_a2a649925c93.slice/crio-97d7c04ac820f964fa6642f81afb510cfa4d81e3a4c59a4261b946d8482d0f3e WatchSource:0}: Error finding container 97d7c04ac820f964fa6642f81afb510cfa4d81e3a4c59a4261b946d8482d0f3e: Status 404 returned error can't find the container with id 97d7c04ac820f964fa6642f81afb510cfa4d81e3a4c59a4261b946d8482d0f3e
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.352055 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6"
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.505323 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwbpd\" (UniqueName: \"kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd\") pod \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") "
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.505719 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") pod \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") "
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.505862 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") pod \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") "
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.506399 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config" (OuterVolumeSpecName: "config") pod "d902791c-2d1f-4c1d-9351-6ef3788b3b77" (UID: "d902791c-2d1f-4c1d-9351-6ef3788b3b77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.506764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d902791c-2d1f-4c1d-9351-6ef3788b3b77" (UID: "d902791c-2d1f-4c1d-9351-6ef3788b3b77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.508948 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9r84t"]
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.510420 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd" (OuterVolumeSpecName: "kube-api-access-zwbpd") pod "d902791c-2d1f-4c1d-9351-6ef3788b3b77" (UID: "d902791c-2d1f-4c1d-9351-6ef3788b3b77"). InnerVolumeSpecName "kube-api-access-zwbpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.609108 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.609141 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwbpd\" (UniqueName: \"kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.609152 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.753668 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" event={"ID":"d902791c-2d1f-4c1d-9351-6ef3788b3b77","Type":"ContainerDied","Data":"726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5"}
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.754069 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6"
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.777770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t" event={"ID":"2d1c1ab5-7e43-47cd-8218-3d945574a79c","Type":"ContainerStarted","Data":"ebce52a94b4fb29c30b89c997e292645481163c57e0edf829e59a0a3b4cc6094"}
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.782802 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.787065 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" event={"ID":"c9b84394-02f1-4bde-befe-a2a649925c93","Type":"ContainerStarted","Data":"97d7c04ac820f964fa6642f81afb510cfa4d81e3a4c59a4261b946d8482d0f3e"}
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.790243 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e1c7625-25e1-442f-9f71-5d2a9323306c","Type":"ContainerStarted","Data":"076cb278f179a7d28ea480b3e3ec46d4a5cc5412e18855f107c2554883d7d67c"}
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.854126 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"]
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.881120 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"]
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.896962 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 28 18:35:24 crc kubenswrapper[4985]: W0128 18:35:24.990980 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96162e6f_966d_438d_9362_ef03abc4b277.slice/crio-e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22 WatchSource:0}: Error finding container e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22: Status 404 returned error can't find the container with id e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22
Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.991933 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f287q"]
Jan 28 18:35:24 crc kubenswrapper[4985]: W0128 18:35:24.997327 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c181f14_26b7_49f4_9ae0_869d9b291938.slice/crio-ca0bb8b5399b511a513e3b1f1d114eeeb939d9fe220f62c4ae70ed6aff99afb9 WatchSource:0}: Error finding container ca0bb8b5399b511a513e3b1f1d114eeeb939d9fe220f62c4ae70ed6aff99afb9: Status 404 returned error can't find the container with id ca0bb8b5399b511a513e3b1f1d114eeeb939d9fe220f62c4ae70ed6aff99afb9
Jan 28 18:35:25 crc kubenswrapper[4985]: W0128 18:35:25.013366 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76ff3fb3_d9e1_41dc_a644_8ac29cb97d11.slice/crio-8abe09c7604dfb391e40de5a4e3d7ff05d0fc7455a2e80d39a82d081f4c22406 WatchSource:0}: Error finding container 8abe09c7604dfb391e40de5a4e3d7ff05d0fc7455a2e80d39a82d081f4c22406: Status 404 returned error can't find the container with id 8abe09c7604dfb391e40de5a4e3d7ff05d0fc7455a2e80d39a82d081f4c22406
Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.277398 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d572008e-db0e-44d1-af83-a8c9a7f2cf48" path="/var/lib/kubelet/pods/d572008e-db0e-44d1-af83-a8c9a7f2cf48/volumes"
Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.277787 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d902791c-2d1f-4c1d-9351-6ef3788b3b77" path="/var/lib/kubelet/pods/d902791c-2d1f-4c1d-9351-6ef3788b3b77/volumes"
Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.800328 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74779d9b4-2xxwx" event={"ID":"6b348b0a-4b9a-4216-adbf-02bcefe1f011","Type":"ContainerStarted","Data":"64451822b6a5d78bf7c6fef9ea73354b476e0858e3dd3396503a08a9645b7247"}
Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.803028 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f287q" event={"ID":"2c181f14-26b7-49f4-9ae0-869d9b291938","Type":"ContainerStarted","Data":"ca0bb8b5399b511a513e3b1f1d114eeeb939d9fe220f62c4ae70ed6aff99afb9"}
Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.804285 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11","Type":"ContainerStarted","Data":"8abe09c7604dfb391e40de5a4e3d7ff05d0fc7455a2e80d39a82d081f4c22406"}
Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.806197 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerStarted","Data":"3dc2fb534ca52f8faf7f4cde3f2dda84c2df48066734fe6ac9c5b40591a7af86"}
Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.808002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22"}
Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.825612 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74779d9b4-2xxwx" podStartSLOduration=20.825594407 podStartE2EDuration="20.825594407s" podCreationTimestamp="2026-01-28 18:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:25.817435476 +0000 UTC m=+1336.643998297" watchObservedRunningTime="2026-01-28 18:35:25.825594407 +0000 UTC m=+1336.652157228"
Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.326025 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74779d9b4-2xxwx"
Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.326178 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74779d9b4-2xxwx"
Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.333169 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-74779d9b4-2xxwx"
Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.819708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerStarted","Data":"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517"}
Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.827614 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74779d9b4-2xxwx"
Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.963723 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"]
Jan 28 18:35:27 crc kubenswrapper[4985]: I0128 18:35:27.832497 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerStarted","Data":"48b9afd0e8ea6f4d4858d6f84a49b2f7c97a3a8f124cd52fc3574f7899a262df"}
Jan 28 18:35:27 crc kubenswrapper[4985]: I0128 18:35:27.837145 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerStarted","Data":"4546478e3b48ee65a1e4f5b248d4caed2739a0baae4f2cf1c67d5da021b79ce7"}
Jan 28 18:35:27 crc kubenswrapper[4985]: I0128 18:35:27.842424 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"88fe31db-8414-43ac-b547-fa0278d9508f","Type":"ContainerStarted","Data":"b2ceb9916f921708e12af47eab44ac983832d4dd7d69425eda27d0fb98bed8c0"}
Jan 28 18:35:27 crc kubenswrapper[4985]: I0128 18:35:27.888747 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=5.657164659 podStartE2EDuration="25.888721833s" podCreationTimestamp="2026-01-28 18:35:02 +0000 UTC" firstStartedPulling="2026-01-28 18:35:04.100441045 +0000 UTC m=+1314.927003866" lastFinishedPulling="2026-01-28 18:35:24.331998219 +0000 UTC m=+1335.158561040" observedRunningTime="2026-01-28 18:35:27.884141174 +0000 UTC m=+1338.710704005" watchObservedRunningTime="2026-01-28 18:35:27.888721833 +0000 UTC m=+1338.715284664"
Jan 28 18:35:28 crc kubenswrapper[4985]: I0128 18:35:28.857981 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerStarted","Data":"dfcb150ccda2aa4d1050a6d900540fe9f90c22d4f5256e19b6eeee11fa6e624a"}
Jan 28 18:35:28 crc kubenswrapper[4985]: I0128 18:35:28.866482 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerStarted","Data":"bb84d317406cd6ce8331d52ba3971c969e272858edb60fe48bf5c6408f6194f8"}
Jan 28 18:35:28 crc kubenswrapper[4985]: I0128 18:35:28.866947 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 28 18:35:29 crc kubenswrapper[4985]: I0128 18:35:29.878313 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4b8dd73-ff4d-44d3-b30f-a994e993392d","Type":"ContainerStarted","Data":"926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29"}
Jan 28 18:35:29 crc kubenswrapper[4985]: I0128 18:35:29.880285 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.222049 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.539399587 podStartE2EDuration="27.22202594s" podCreationTimestamp="2026-01-28 18:35:04 +0000 UTC" firstStartedPulling="2026-01-28 18:35:05.773817968 +0000 UTC m=+1316.600380789" lastFinishedPulling="2026-01-28 18:35:28.456444331 +0000 UTC m=+1339.283007142" observedRunningTime="2026-01-28 18:35:29.914686932 +0000 UTC m=+1340.741249753" watchObservedRunningTime="2026-01-28 18:35:31.22202594 +0000 UTC m=+1342.048588761"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.230235 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vsdt5"]
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.231924 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.242602 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.304330 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.304798 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovs-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.304993 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-combined-ca-bundle\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.305067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lll8z\" (UniqueName: \"kubernetes.io/projected/d67712df-b1fe-463f-9a6c-c0591aa6cec2-kube-api-access-lll8z\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.305096 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d67712df-b1fe-463f-9a6c-c0591aa6cec2-config\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.305144 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovn-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.304375 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vsdt5"]
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915466 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915556 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovs-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-combined-ca-bundle\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915689 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lll8z\" (UniqueName: \"kubernetes.io/projected/d67712df-b1fe-463f-9a6c-c0591aa6cec2-kube-api-access-lll8z\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915717 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d67712df-b1fe-463f-9a6c-c0591aa6cec2-config\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovn-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.916042 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovn-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.919755 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovs-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.920129 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d67712df-b1fe-463f-9a6c-c0591aa6cec2-config\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.928620 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-combined-ca-bundle\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.947029 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.956190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lll8z\" (UniqueName: \"kubernetes.io/projected/d67712df-b1fe-463f-9a6c-c0591aa6cec2-kube-api-access-lll8z\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.061522 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"]
Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.097744 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"]
Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.139131 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"]
Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.139306 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.142059 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.176280 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vsdt5"
Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.300679 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"]
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.331025 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.331096 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.331190 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.331230 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.359464 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"]
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.361552 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.369368 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.391167 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"]
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440150 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440297 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440375 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440664 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440726 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440810 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440829 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440989 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.441045 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.441331 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.441456 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.442097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.475321 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.543667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.543738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.543785 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.544285 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.544529 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.545033 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.545342 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.545371 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.545987 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.565051 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.651643 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.717678 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.769867 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.157481 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.184649 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"]
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.213549 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"]
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.215909 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.237097 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"]
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.309924 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.310095 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.310159 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.310211 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.310283 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413456 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413632 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413682 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413764 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.415106 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.415524 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.416133 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.416340 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.444461 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.536550 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.134297 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.146508 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cthrq\" (UniqueName: \"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") pod \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.146615 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") pod \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.146758 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") pod \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.147961 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config" (OuterVolumeSpecName: "config") pod "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" (UID: "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.149521 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" (UID: "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.179610 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq" (OuterVolumeSpecName: "kube-api-access-cthrq") pod "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" (UID: "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c"). InnerVolumeSpecName "kube-api-access-cthrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.252800 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cthrq\" (UniqueName: \"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.252848 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.252862 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.284037 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.291048 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.291191 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.295578 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.295629 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.295582 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.295902 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-szwvs" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.359691 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-922sb\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-kube-api-access-922sb\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.360153 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.360235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-cache\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.360543 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-lock\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc 
kubenswrapper[4985]: I0128 18:35:37.360586 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.360638 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462326 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-cache\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462436 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-lock\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462456 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462488 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462529 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-922sb\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-kube-api-access-922sb\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462638 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.462685 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.462708 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.462764 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift 
podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:37.962744361 +0000 UTC m=+1348.789307182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.463144 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-lock\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.463622 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-cache\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.469778 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.469821 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f34bf9770dd49758400121ece696bba237212777a54e7b942c1c852077ee2a45/globalmount\"" pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.503233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.511428 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-922sb\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-kube-api-access-922sb\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.511437 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.794844 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.868890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") pod \"ee74e7b2-a80e-4390-afec-a13db1b25da2\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.869461 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") pod \"ee74e7b2-a80e-4390-afec-a13db1b25da2\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.869538 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config" (OuterVolumeSpecName: "config") pod "ee74e7b2-a80e-4390-afec-a13db1b25da2" (UID: "ee74e7b2-a80e-4390-afec-a13db1b25da2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.869601 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") pod \"ee74e7b2-a80e-4390-afec-a13db1b25da2\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.870221 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.870543 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee74e7b2-a80e-4390-afec-a13db1b25da2" (UID: "ee74e7b2-a80e-4390-afec-a13db1b25da2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.873561 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp" (OuterVolumeSpecName: "kube-api-access-qwhbp") pod "ee74e7b2-a80e-4390-afec-a13db1b25da2" (UID: "ee74e7b2-a80e-4390-afec-a13db1b25da2"). InnerVolumeSpecName "kube-api-access-qwhbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.972796 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.972992 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.973014 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.973071 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:38.973053557 +0000 UTC m=+1349.799616378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.973092 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.973108 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.015704 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" event={"ID":"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c","Type":"ContainerDied","Data":"3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b"} Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.015747 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.017098 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" event={"ID":"ee74e7b2-a80e-4390-afec-a13db1b25da2","Type":"ContainerDied","Data":"31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376"} Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.017126 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.118310 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.132757 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.159631 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.186035 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.993948 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:38 crc kubenswrapper[4985]: E0128 18:35:38.994957 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:38 crc kubenswrapper[4985]: E0128 18:35:38.994983 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:38 crc kubenswrapper[4985]: E0128 18:35:38.995039 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:40.99502172 +0000 UTC m=+1351.821584541 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.278434 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" path="/var/lib/kubelet/pods/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c/volumes" Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.279487 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee74e7b2-a80e-4390-afec-a13db1b25da2" path="/var/lib/kubelet/pods/ee74e7b2-a80e-4390-afec-a13db1b25da2/volumes" Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.473392 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"] Jan 28 18:35:39 crc kubenswrapper[4985]: W0128 18:35:39.481380 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa80be1e_734c_44bc_a957_137332ecd58a.slice/crio-d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87 WatchSource:0}: Error finding container d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87: Status 404 returned error can't find the container with id d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87 Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.484091 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"] Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.494480 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vsdt5"] Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.726715 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"] Jan 28 18:35:39 crc kubenswrapper[4985]: W0128 18:35:39.726906 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddadb283d_7f9f_414c_9017_f8c0875878ad.slice/crio-8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762 WatchSource:0}: Error finding container 8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762: Status 404 returned error can't find the container with id 8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762 Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.059023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" event={"ID":"c9b84394-02f1-4bde-befe-a2a649925c93","Type":"ContainerStarted","Data":"10ed3a239138cda36178fa97f77027b6bb27361007e7a5dfba71518cc70cc9e7"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.060834 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vsdt5" event={"ID":"d67712df-b1fe-463f-9a6c-c0591aa6cec2","Type":"ContainerStarted","Data":"ce62da9ab4ad5ebe9ac484655e095e764a13892f2927ef24b033182c66dbaa4e"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.062980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11","Type":"ContainerStarted","Data":"530a57d4fcc58a7444990734dca2f387a5beaeeefa1e7184ab5c1cd39f839253"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.064336 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerStarted","Data":"d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.066956 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerStarted","Data":"8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.068972 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t" event={"ID":"2d1c1ab5-7e43-47cd-8218-3d945574a79c","Type":"ContainerStarted","Data":"476a165e5ac1277d2ba38cef9c019671f5007fa52413c290f1e43a7139b37662"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.069067 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9r84t" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.070670 4985 generic.go:334] "Generic (PLEG): container finished" podID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerID="b3532c01bd8307d25c0ad6b941e217b75cf8f836e9ddc2623bf3d7cfac146df1" exitCode=0 Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.070715 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f287q" event={"ID":"2c181f14-26b7-49f4-9ae0-869d9b291938","Type":"ContainerDied","Data":"b3532c01bd8307d25c0ad6b941e217b75cf8f836e9ddc2623bf3d7cfac146df1"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.072407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e1c7625-25e1-442f-9f71-5d2a9323306c","Type":"ContainerStarted","Data":"cdb7a2c935be73f6614fdc0b3e030d51920f96308f271b19791dab132d08302b"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.073401 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" event={"ID":"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3","Type":"ContainerStarted","Data":"675439af974dddbf47cd9e99f2088bc55d3793ed853e1f96188d1c6dfc1f7742"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.091749 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" podStartSLOduration=20.495170079 podStartE2EDuration="35.091727283s" podCreationTimestamp="2026-01-28 18:35:05 +0000 UTC" firstStartedPulling="2026-01-28 18:35:24.351125649 +0000 UTC m=+1335.177688470" lastFinishedPulling="2026-01-28 18:35:38.947682853 +0000 UTC m=+1349.774245674" observedRunningTime="2026-01-28 18:35:40.076784761 +0000 UTC m=+1350.903347582" watchObservedRunningTime="2026-01-28 18:35:40.091727283 +0000 UTC m=+1350.918290104" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.147691 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9r84t" podStartSLOduration=18.80090625 podStartE2EDuration="33.147665532s" podCreationTimestamp="2026-01-28 18:35:07 +0000 UTC" firstStartedPulling="2026-01-28 18:35:24.583012026 +0000 UTC m=+1335.409574847" lastFinishedPulling="2026-01-28 18:35:38.929771308 +0000 UTC m=+1349.756334129" observedRunningTime="2026-01-28 18:35:40.138813212 +0000 UTC m=+1350.965376043" watchObservedRunningTime="2026-01-28 18:35:40.147665532 +0000 UTC m=+1350.974228353" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.383810 4985 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.390056 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.393899 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.394002 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.394134 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.406840 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.447367 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.447428 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.447471 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.447846 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.448020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.448121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.448164 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdhf\" (UniqueName: 
\"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.473541 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-l4q82"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.475486 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.489328 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.507584 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l4q82"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550140 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550203 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550243 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hdhf\" (UniqueName: \"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550329 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrps\" (UniqueName: \"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550350 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550378 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") 
pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550427 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550460 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550496 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550512 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550547 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.555539 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.557147 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " 
pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.558358 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.558434 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.561535 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.562893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.578816 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdhf\" (UniqueName: \"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652381 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrps\" (UniqueName: \"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652446 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652490 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652607 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652646 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652811 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.656188 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.656373 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.657975 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.658460 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.661956 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.662281 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.733118 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrps\" (UniqueName: 
\"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.846667 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.863156 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.064590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:41 crc kubenswrapper[4985]: E0128 18:35:41.065295 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:41 crc kubenswrapper[4985]: E0128 18:35:41.065317 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:41 crc kubenswrapper[4985]: E0128 18:35:41.065384 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:45.065362151 +0000 UTC m=+1355.891924972 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.098833 4985 generic.go:334] "Generic (PLEG): container finished" podID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" containerID="a6103d5721e8d5e8d69b116fa910ec638e1c66737a310fcba779b01a88563be1" exitCode=0 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.098928 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" event={"ID":"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3","Type":"ContainerDied","Data":"a6103d5721e8d5e8d69b116fa910ec638e1c66737a310fcba779b01a88563be1"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.102551 4985 generic.go:334] "Generic (PLEG): container finished" podID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerID="48b9afd0e8ea6f4d4858d6f84a49b2f7c97a3a8f124cd52fc3574f7899a262df" exitCode=0 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.102651 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerDied","Data":"48b9afd0e8ea6f4d4858d6f84a49b2f7c97a3a8f124cd52fc3574f7899a262df"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.128328 4985 generic.go:334] "Generic (PLEG): container finished" podID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerID="3dc2fb534ca52f8faf7f4cde3f2dda84c2df48066734fe6ac9c5b40591a7af86" exitCode=0 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.128412 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerDied","Data":"3dc2fb534ca52f8faf7f4cde3f2dda84c2df48066734fe6ac9c5b40591a7af86"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.146210 4985 generic.go:334] "Generic (PLEG): container finished" podID="fa80be1e-734c-44bc-a957-137332ecd58a" containerID="b07a966b1eedec1e93ccdffea190010036fa22a709598fabaaf5909bac14f589" exitCode=0 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.147586 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerDied","Data":"b07a966b1eedec1e93ccdffea190010036fa22a709598fabaaf5909bac14f589"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.175454 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerStarted","Data":"68193873dff4bd6a35834f28702dce0fa7f1463ec5af6dd5571aab6e1aa60d3d"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.187709 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.188059 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.442245 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l4q82"] Jan 28 18:35:41 crc kubenswrapper[4985]: W0128 18:35:41.457811 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75109476_5e36_45b8_afb9_1e7f3a9331f9.slice/crio-c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314 WatchSource:0}: Error finding container c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314: Status 404 returned error can't find the container with id c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.668748 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.777934 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.903232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") pod \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.903872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") pod \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.903960 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") pod \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.904081 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") pod \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.911430 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8" (OuterVolumeSpecName: "kube-api-access-rn2k8") pod "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" (UID: "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3"). InnerVolumeSpecName "kube-api-access-rn2k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.982554 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" (UID: "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.000427 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" (UID: "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.001196 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config" (OuterVolumeSpecName: "config") pod "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" (UID: "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.006528 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.006569 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.006582 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.006590 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.199692 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f287q" event={"ID":"2c181f14-26b7-49f4-9ae0-869d9b291938","Type":"ContainerStarted","Data":"915b604eb65ad128607175fc36fd28a21541e6d64dcf795a8773b255c6feb3c7"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.199757 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f287q" event={"ID":"2c181f14-26b7-49f4-9ae0-869d9b291938","Type":"ContainerStarted","Data":"ceb50d163fa3519c9657532c007f0ca735c8deae4820e378cf9b4069247a0b84"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.200294 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.200325 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.202735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l4q82" event={"ID":"75109476-5e36-45b8-afb9-1e7f3a9331f9","Type":"ContainerStarted","Data":"c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.211513 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerStarted","Data":"e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.218788 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerStarted","Data":"7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.218980 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.222974 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6lq9x" event={"ID":"c0714595-ac9e-4945-9250-6f499317070d","Type":"ContainerStarted","Data":"8984873f7fbeb5534245e789d9a64682aba9641126cebac96c088a070c8c95bb"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 
18:35:42.234771 4985 generic.go:334] "Generic (PLEG): container finished" podID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerID="68193873dff4bd6a35834f28702dce0fa7f1463ec5af6dd5571aab6e1aa60d3d" exitCode=0 Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.234845 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerDied","Data":"68193873dff4bd6a35834f28702dce0fa7f1463ec5af6dd5571aab6e1aa60d3d"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.234875 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerStarted","Data":"4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.236777 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.237916 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-f287q" podStartSLOduration=21.340028475 podStartE2EDuration="35.237873313s" podCreationTimestamp="2026-01-28 18:35:07 +0000 UTC" firstStartedPulling="2026-01-28 18:35:25.004694911 +0000 UTC m=+1335.831257732" lastFinishedPulling="2026-01-28 18:35:38.902539749 +0000 UTC m=+1349.729102570" observedRunningTime="2026-01-28 18:35:42.224381722 +0000 UTC m=+1353.050944553" watchObservedRunningTime="2026-01-28 18:35:42.237873313 +0000 UTC m=+1353.064436134" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.239820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" event={"ID":"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3","Type":"ContainerDied","Data":"675439af974dddbf47cd9e99f2088bc55d3793ed853e1f96188d1c6dfc1f7742"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.239862 4985 scope.go:117] "RemoveContainer" containerID="a6103d5721e8d5e8d69b116fa910ec638e1c66737a310fcba779b01a88563be1" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.239993 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.261835 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerStarted","Data":"c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.264901 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podStartSLOduration=6.799873137 podStartE2EDuration="7.264875035s" podCreationTimestamp="2026-01-28 18:35:35 +0000 UTC" firstStartedPulling="2026-01-28 18:35:39.484565211 +0000 UTC m=+1350.311128032" lastFinishedPulling="2026-01-28 18:35:39.949567069 +0000 UTC m=+1350.776129930" observedRunningTime="2026-01-28 18:35:42.251633982 +0000 UTC m=+1353.078196813" watchObservedRunningTime="2026-01-28 18:35:42.264875035 +0000 UTC m=+1353.091437856" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.279096 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.362210927 podStartE2EDuration="43.279062046s" podCreationTimestamp="2026-01-28 18:34:59 +0000 UTC" firstStartedPulling="2026-01-28 18:35:04.017557735 +0000 UTC m=+1314.844120566" lastFinishedPulling="2026-01-28 18:35:23.934408864 +0000 UTC m=+1334.760971685" observedRunningTime="2026-01-28 18:35:42.275641989 +0000 UTC m=+1353.102204810" watchObservedRunningTime="2026-01-28 18:35:42.279062046 +0000 UTC m=+1353.105624867" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.322553 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-sbd6h" podStartSLOduration=9.748896338 podStartE2EDuration="10.322528593s" podCreationTimestamp="2026-01-28 18:35:32 +0000 UTC" firstStartedPulling="2026-01-28 18:35:39.729754163 +0000 UTC m=+1350.556316984" lastFinishedPulling="2026-01-28 18:35:40.303386418 +0000 UTC m=+1351.129949239" observedRunningTime="2026-01-28 18:35:42.304314759 +0000 UTC m=+1353.130877590" watchObservedRunningTime="2026-01-28 18:35:42.322528593 +0000 UTC m=+1353.149091414" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.360054 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.166123711 podStartE2EDuration="42.360030042s" podCreationTimestamp="2026-01-28 18:35:00 +0000 UTC" firstStartedPulling="2026-01-28 18:35:04.331276972 +0000 UTC m=+1315.157839793" lastFinishedPulling="2026-01-28 18:35:24.525183303 +0000 UTC m=+1335.351746124" observedRunningTime="2026-01-28 18:35:42.329796008 +0000 UTC m=+1353.156358829" watchObservedRunningTime="2026-01-28 18:35:42.360030042 +0000 UTC m=+1353.186592863" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.371606 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"] Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.380371 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"] Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.583434 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.583490 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:43 crc kubenswrapper[4985]: I0128 18:35:43.279103 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" path="/var/lib/kubelet/pods/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3/volumes" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.279592 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vsdt5" event={"ID":"d67712df-b1fe-463f-9a6c-c0591aa6cec2","Type":"ContainerStarted","Data":"95c3e5aa1cefcadf132fa9c16f2ebce0b4609c97428c17b58c9b0666940e9a66"} Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.285771 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11","Type":"ContainerStarted","Data":"90271bf8a8a83b77da89912a0b1e37403508523bddff9f8d403b25844dea1383"} Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.289729 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e1c7625-25e1-442f-9f71-5d2a9323306c","Type":"ContainerStarted","Data":"d4f8e68010b80f72bdfffb75c6fd4d5190736525ed76f427c0d1e127e9609bcc"} Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.308584 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vsdt5" podStartSLOduration=8.899183517 podStartE2EDuration="13.308561794s" podCreationTimestamp="2026-01-28 18:35:31 +0000 UTC" firstStartedPulling="2026-01-28 18:35:39.486850716 +0000 UTC m=+1350.313413537" lastFinishedPulling="2026-01-28 18:35:43.896228993 +0000 UTC m=+1354.722791814" observedRunningTime="2026-01-28 18:35:44.300163427 +0000 UTC m=+1355.126726268" watchObservedRunningTime="2026-01-28 18:35:44.308561794 +0000 UTC m=+1355.135124625" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.341913 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.537851043 podStartE2EDuration="34.341890685s" podCreationTimestamp="2026-01-28 18:35:10 +0000 UTC" firstStartedPulling="2026-01-28 18:35:25.015716852 +0000 UTC m=+1335.842279673" lastFinishedPulling="2026-01-28 18:35:43.819756494 +0000 UTC m=+1354.646319315" observedRunningTime="2026-01-28 18:35:44.332016856 +0000 UTC m=+1355.158579697" watchObservedRunningTime="2026-01-28 18:35:44.341890685 +0000 UTC m=+1355.168453506" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.369432 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.84456502 podStartE2EDuration="37.369408702s" podCreationTimestamp="2026-01-28 18:35:07 +0000 UTC" firstStartedPulling="2026-01-28 18:35:24.349221005 +0000 UTC m=+1335.175783836" lastFinishedPulling="2026-01-28 18:35:43.874064707 +0000 UTC m=+1354.700627518" observedRunningTime="2026-01-28 18:35:44.347518924 +0000 UTC m=+1355.174081745" watchObservedRunningTime="2026-01-28 18:35:44.369408702 +0000 UTC m=+1355.195971513" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.818966 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.862535 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.067476 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.087217 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:45 crc kubenswrapper[4985]: E0128 18:35:45.087416 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:45 crc kubenswrapper[4985]: E0128 18:35:45.087432 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:45 crc kubenswrapper[4985]: E0128 18:35:45.087483 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:53.087469603 +0000 UTC m=+1363.914032434 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.116025 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.307646 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.307681 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.360335 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.372060 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.781196 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 18:35:45 crc kubenswrapper[4985]: E0128 18:35:45.781972 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" containerName="init" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.781998 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" containerName="init" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.782558 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" containerName="init" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.785328 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.795603 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.799989 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.800282 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7rqdh" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.800433 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.811854 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913606 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-config\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913673 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znshz\" (UniqueName: \"kubernetes.io/projected/76a14385-7b25-48b8-8614-1a77892a1119-kube-api-access-znshz\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913707 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913762 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76a14385-7b25-48b8-8614-1a77892a1119-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913795 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913837 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913880 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-scripts\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: 
I0128 18:35:46.015932 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-config\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.015996 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znshz\" (UniqueName: \"kubernetes.io/projected/76a14385-7b25-48b8-8614-1a77892a1119-kube-api-access-znshz\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016024 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76a14385-7b25-48b8-8614-1a77892a1119-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016086 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016127 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016174 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-scripts\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016691 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76a14385-7b25-48b8-8614-1a77892a1119-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.017391 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-scripts\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.017439 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-config\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.024225 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.027369 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.040244 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znshz\" (UniqueName: \"kubernetes.io/projected/76a14385-7b25-48b8-8614-1a77892a1119-kube-api-access-znshz\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.045058 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.130727 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.321767 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"2a94f1b22150bff413a35eb8a3eed5745a2369fd30defeeb03ec8e8bb54d93e7"} Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.882048 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.982905 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:47 crc kubenswrapper[4985]: I0128 18:35:47.720784 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:47 crc kubenswrapper[4985]: W0128 18:35:47.826808 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76a14385_7b25_48b8_8614_1a77892a1119.slice/crio-4ff942b196a891386363ab9cf92d0621b9bee9bd1a17f13ee4166170c805f2c5 WatchSource:0}: Error finding container 4ff942b196a891386363ab9cf92d0621b9bee9bd1a17f13ee4166170c805f2c5: Status 404 returned error can't find the container with id 4ff942b196a891386363ab9cf92d0621b9bee9bd1a17f13ee4166170c805f2c5 Jan 28 18:35:47 crc kubenswrapper[4985]: I0128 18:35:47.837970 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.351897 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"76a14385-7b25-48b8-8614-1a77892a1119","Type":"ContainerStarted","Data":"4ff942b196a891386363ab9cf92d0621b9bee9bd1a17f13ee4166170c805f2c5"} Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.359406 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l4q82" 
event={"ID":"75109476-5e36-45b8-afb9-1e7f3a9331f9","Type":"ContainerStarted","Data":"d9984694685d646182db409a296c9eb34220178e5fa3648431bc4bdbe12a9c45"} Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.363087 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6lq9x" event={"ID":"c0714595-ac9e-4945-9250-6f499317070d","Type":"ContainerStarted","Data":"00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc"} Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.363225 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-ring-rebalance-6lq9x" podUID="c0714595-ac9e-4945-9250-6f499317070d" containerName="swift-ring-rebalance" containerID="cri-o://00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc" gracePeriod=30 Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.388871 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-l4q82" podStartSLOduration=2.483675252 podStartE2EDuration="8.388845709s" podCreationTimestamp="2026-01-28 18:35:40 +0000 UTC" firstStartedPulling="2026-01-28 18:35:41.46490938 +0000 UTC m=+1352.291472201" lastFinishedPulling="2026-01-28 18:35:47.370079837 +0000 UTC m=+1358.196642658" observedRunningTime="2026-01-28 18:35:48.387846591 +0000 UTC m=+1359.214409432" watchObservedRunningTime="2026-01-28 18:35:48.388845709 +0000 UTC m=+1359.215408550" Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.420023 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6lq9x" podStartSLOduration=2.7201037059999997 podStartE2EDuration="8.420001448s" podCreationTimestamp="2026-01-28 18:35:40 +0000 UTC" firstStartedPulling="2026-01-28 18:35:41.680124986 +0000 UTC m=+1352.506687807" lastFinishedPulling="2026-01-28 18:35:47.380022728 +0000 UTC m=+1358.206585549" observedRunningTime="2026-01-28 18:35:48.405731706 +0000 UTC m=+1359.232294527" watchObservedRunningTime="2026-01-28 18:35:48.420001448 +0000 UTC m=+1359.246564269" Jan 28 18:35:50 crc kubenswrapper[4985]: I0128 18:35:50.539077 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:50 crc kubenswrapper[4985]: I0128 18:35:50.605746 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"] Jan 28 18:35:50 crc kubenswrapper[4985]: I0128 18:35:50.606047 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-sbd6h" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="dnsmasq-dns" containerID="cri-o://4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce" gracePeriod=10 Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.077435 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.077847 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.220842 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fm4x7"] Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.223129 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.226882 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.231846 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fm4x7"] Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.383406 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.383464 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.404968 4985 generic.go:334] "Generic (PLEG): container finished" podID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerID="4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce" exitCode=0 Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.405016 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerDied","Data":"4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce"} Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.485937 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.486029 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.487116 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.524891 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.555008 4985 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.764040 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.944495 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.184186 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ksczb"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.185662 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.198764 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ksczb"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.255909 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fm4x7"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.274361 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.275921 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.280638 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.286497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.300630 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.332304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.332653 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.428406 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fm4x7" event={"ID":"12f068aa-ed0a-47e7-9f95-16f86bf91343","Type":"ContainerStarted","Data":"8bd64f391002afc6ed3d23bed80d044acc414be4bab0351a66dfcef4e0f3f74c"} Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.442027 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.442043 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerDied","Data":"8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762"} Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.442107 4985 scope.go:117] "RemoveContainer" containerID="4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447203 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447447 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447554 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447681 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447726 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.450806 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.450931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.451038 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " 
pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.451242 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.453463 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp" (OuterVolumeSpecName: "kube-api-access-mcghp") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "kube-api-access-mcghp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.454520 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.458712 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.468460 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9qd5p"] Jan 28 18:35:52 crc kubenswrapper[4985]: E0128 18:35:52.468993 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="init" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.469018 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="init" Jan 28 18:35:52 crc kubenswrapper[4985]: E0128 18:35:52.469039 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="dnsmasq-dns" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.469047 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="dnsmasq-dns" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.469346 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="dnsmasq-dns" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.470134 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.474057 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.478467 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9qd5p"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.480078 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-64878fb8f-ljltp" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerName="console" containerID="cri-o://c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" gracePeriod=15 Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.480307 4985 scope.go:117] "RemoveContainer" containerID="68193873dff4bd6a35834f28702dce0fa7f1463ec5af6dd5571aab6e1aa60d3d" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.528058 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.560508 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.560636 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.564373 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.564847 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.573906 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.575446 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.578529 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.579371 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config" (OuterVolumeSpecName: "config") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.579793 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.594121 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.595807 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.608972 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.647525 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.654357 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669217 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669373 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669560 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669574 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669588 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.772762 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.772830 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.773234 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.773552 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.773672 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.806084 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.851015 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-z2jgs"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.857672 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.863497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z2jgs"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.878322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.878447 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.880758 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.898019 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.906476 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.985142 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.985377 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.996710 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.998174 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:52.999945 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.006130 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.086903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.087089 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.087134 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.087173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.087973 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") pod \"glance-db-create-z2jgs\" (UID: 
\"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.114548 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.116758 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.189886 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.189943 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.190016 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: E0128 18:35:53.190107 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:53 crc kubenswrapper[4985]: E0128 18:35:53.190136 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:53 crc kubenswrapper[4985]: E0128 18:35:53.190194 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:36:09.190177102 +0000 UTC m=+1380.016739923 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.191433 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.212043 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.280265 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.288296 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.288339 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.294474 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.309585 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64878fb8f-ljltp_0d2b3a75-cb2e-41a2-9005-a72a8aebb818/console/0.log" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.309651 4985 util.go:48] "No ready sandbox for pod can be found. 
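The etc-swift failure above is a projected volume whose ConfigMap source, swift-ring-files, does not exist yet; setup fails and nestedpendingoperations refuses retries for 16s (durationBeforeRetry), a wait consistent with an exponential backoff that keeps doubling until the ConfigMap appears. Below is a sketch of the kind of volume definition that produces this behavior, assuming the k8s.io/api module is available; the names are copied from the log, and Optional=false is an assumption that makes the missing ConfigMap a hard error rather than an empty directory.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := false // with Optional=false, a missing ConfigMap fails the mount, as logged
	etcSwift := corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
						Optional:             &optional,
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", etcSwift)
}
```

Once something publishes the swift-ring-files ConfigMap, the next retry succeeds and swift-storage-0 can proceed; until then the kubelet simply keeps lengthening the wait between attempts.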
Need to start a new one" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.402864 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.402958 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.402999 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.403073 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.403290 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.403409 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.403450 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.407126 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.407617 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.408345 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config" (OuterVolumeSpecName: "console-config") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.410804 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca" (OuterVolumeSpecName: "service-ca") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.413176 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.416358 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67" (OuterVolumeSpecName: "kube-api-access-cpv67") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "kube-api-access-cpv67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.426611 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.475942 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.483851 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"76a14385-7b25-48b8-8614-1a77892a1119","Type":"ContainerStarted","Data":"6857e6477c043d09d8a7adde771c8aa2d521d7a625e2cbad40fe527cba92acba"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.483885 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"76a14385-7b25-48b8-8614-1a77892a1119","Type":"ContainerStarted","Data":"09facf0b5f7f7b955017702e5f0cca1614271f1db9b3f6b6134d147566e4189f"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.484524 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.491463 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ksczb"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508465 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508505 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508521 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508533 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508553 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508564 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508575 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.514797 4985 generic.go:334] "Generic (PLEG): container finished" podID="12f068aa-ed0a-47e7-9f95-16f86bf91343" containerID="e79b0c26c13e421f90b1e346a7a6ed37fdf036d779d67dcae2b50acce53ce0c6" exitCode=0 Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.514874 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fm4x7" 
event={"ID":"12f068aa-ed0a-47e7-9f95-16f86bf91343","Type":"ContainerDied","Data":"e79b0c26c13e421f90b1e346a7a6ed37fdf036d779d67dcae2b50acce53ce0c6"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.522519 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.389896532 podStartE2EDuration="8.522504164s" podCreationTimestamp="2026-01-28 18:35:45 +0000 UTC" firstStartedPulling="2026-01-28 18:35:47.830441825 +0000 UTC m=+1358.657004646" lastFinishedPulling="2026-01-28 18:35:51.963049457 +0000 UTC m=+1362.789612278" observedRunningTime="2026-01-28 18:35:53.511934076 +0000 UTC m=+1364.338496897" watchObservedRunningTime="2026-01-28 18:35:53.522504164 +0000 UTC m=+1364.349066985" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546403 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64878fb8f-ljltp_0d2b3a75-cb2e-41a2-9005-a72a8aebb818/console/0.log" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546449 4985 generic.go:334] "Generic (PLEG): container finished" podID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerID="c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" exitCode=2 Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546479 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64878fb8f-ljltp" event={"ID":"0d2b3a75-cb2e-41a2-9005-a72a8aebb818","Type":"ContainerDied","Data":"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546505 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64878fb8f-ljltp" event={"ID":"0d2b3a75-cb2e-41a2-9005-a72a8aebb818","Type":"ContainerDied","Data":"5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546521 4985 scope.go:117] "RemoveContainer" containerID="c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546600 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.617003 4985 scope.go:117] "RemoveContainer" containerID="c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" Jan 28 18:35:53 crc kubenswrapper[4985]: E0128 18:35:53.621032 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8\": container with ID starting with c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8 not found: ID does not exist" containerID="c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.621094 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8"} err="failed to get container status \"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8\": rpc error: code = NotFound desc = could not find container \"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8\": container with ID starting with c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8 not found: ID does not exist" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.623456 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.636117 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.169907 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z2jgs"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.180910 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9qd5p"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.191538 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.199665 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.561471 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z2jgs" event={"ID":"1a24a5c2-4c45-43dd-a957-253323fed4d5","Type":"ContainerStarted","Data":"b5b1a4710b8858945982e3f5911ca4fd86e8a7dae739eb3659e4c396927b6955"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.561512 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z2jgs" event={"ID":"1a24a5c2-4c45-43dd-a957-253323fed4d5","Type":"ContainerStarted","Data":"6d9b1c199f1062535f568d8f45dde873fe42b5b81b0f1392ff76e0211f842360"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.565860 4985 generic.go:334] "Generic (PLEG): container finished" podID="96162e6f-966d-438d-9362-ef03abc4b277" containerID="2a94f1b22150bff413a35eb8a3eed5745a2369fd30defeeb03ec8e8bb54d93e7" exitCode=0 Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.565962 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"2a94f1b22150bff413a35eb8a3eed5745a2369fd30defeeb03ec8e8bb54d93e7"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.570374 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3e6a-account-create-update-ktg62" event={"ID":"346cb311-0387-4c85-9827-e0091b1e6bcd","Type":"ContainerStarted","Data":"521672f13c59cc25ffac94ddae42298d333bbe43930229a9ebba2d7ae20a8b6d"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.570423 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3e6a-account-create-update-ktg62" event={"ID":"346cb311-0387-4c85-9827-e0091b1e6bcd","Type":"ContainerStarted","Data":"bb09edc01a4c3afb4449a4dacb7ab86a9a7a6e0d155a46be22553034c547ae03"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.576089 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd1-account-create-update-tlhk7" event={"ID":"4adf60c6-4008-4f41-a60b-cf10db1657cf","Type":"ContainerStarted","Data":"7b723368d435c52066b70f7b63bb7ce17848129ed979021f777f40ce02cde0ea"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.576148 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd1-account-create-update-tlhk7" event={"ID":"4adf60c6-4008-4f41-a60b-cf10db1657cf","Type":"ContainerStarted","Data":"2b9e72b871ae9726c48909179e5d8e9383458a61e82e6086b4c9d2eaeaa79c60"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.576709 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-z2jgs" podStartSLOduration=2.5766934470000002 podStartE2EDuration="2.576693447s" podCreationTimestamp="2026-01-28 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:54.574168775 +0000 UTC m=+1365.400731596" watchObservedRunningTime="2026-01-28 18:35:54.576693447 +0000 UTC m=+1365.403256268"
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.581109 4985 generic.go:334] "Generic (PLEG): container finished" podID="e6004532-b8ab-4b69-9907-e7bd26c6735a" containerID="3060e8923564aa30fd03bf66b3d5bcff3578ea99d0b7eb76a560b9022326b58d" exitCode=0
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.581171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1abf-account-create-update-fwwhm" event={"ID":"e6004532-b8ab-4b69-9907-e7bd26c6735a","Type":"ContainerDied","Data":"3060e8923564aa30fd03bf66b3d5bcff3578ea99d0b7eb76a560b9022326b58d"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.581213 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1abf-account-create-update-fwwhm" event={"ID":"e6004532-b8ab-4b69-9907-e7bd26c6735a","Type":"ContainerStarted","Data":"f24ff43e9c1efa3a7fc1289bc1ab6b77ffa3e1a45be1121c6dcc1ee3c4ef0fb9"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.583200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qd5p" event={"ID":"8c2755f3-fac4-4f0b-9afb-a449f1587d11","Type":"ContainerStarted","Data":"609eafe7485b15327ad2db6af8fea1da5eeeb224da5b54e1005034d41800fc19"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.583237 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qd5p" event={"ID":"8c2755f3-fac4-4f0b-9afb-a449f1587d11","Type":"ContainerStarted","Data":"189015c56b26a2946bc608b7b573f5ccb4f5e157b8c0ad9b525476261a7b20ac"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.585500 4985 generic.go:334] "Generic (PLEG): container finished" podID="9900c5fe-8fec-452e-86cc-98d901c94329" containerID="a5fdb593967057491cb666085c46aac8c70a1408fffafe7d2ec91a2157ba041a" exitCode=0
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.585636 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ksczb" event={"ID":"9900c5fe-8fec-452e-86cc-98d901c94329","Type":"ContainerDied","Data":"a5fdb593967057491cb666085c46aac8c70a1408fffafe7d2ec91a2157ba041a"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.585714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ksczb" event={"ID":"9900c5fe-8fec-452e-86cc-98d901c94329","Type":"ContainerStarted","Data":"27094ed44a1a823e00c87afc7c6b6780c4e13b4f03410388f06fe7b875da5910"}
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.597819 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3e6a-account-create-update-ktg62" podStartSLOduration=2.597799243 podStartE2EDuration="2.597799243s" podCreationTimestamp="2026-01-28 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:54.597052672 +0000 UTC m=+1365.423615493" watchObservedRunningTime="2026-01-28 18:35:54.597799243 +0000 UTC m=+1365.424362064"
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.642188 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7fd1-account-create-update-tlhk7" podStartSLOduration=2.642169545 podStartE2EDuration="2.642169545s" podCreationTimestamp="2026-01-28 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:54.634631183 +0000 UTC m=+1365.461194004" watchObservedRunningTime="2026-01-28 18:35:54.642169545 +0000 UTC m=+1365.468732356"
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.725786 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-9qd5p" podStartSLOduration=2.725766035 podStartE2EDuration="2.725766035s" podCreationTimestamp="2026-01-28 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:54.682620497 +0000 UTC m=+1365.509183318" watchObservedRunningTime="2026-01-28 18:35:54.725766035 +0000 UTC m=+1365.552328856"
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.864590 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"]
Jan 28 18:35:54 crc kubenswrapper[4985]: E0128 18:35:54.865314 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerName="console"
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.865341 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerName="console"
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.865602 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerName="console"
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.866649 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.892002 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"]
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.054225 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.054388 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.071589 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"]
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.072951 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.074649 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.084121 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"]
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.097328 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fm4x7"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.156686 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.156792 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.157568 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.187941 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.259145 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") pod \"12f068aa-ed0a-47e7-9f95-16f86bf91343\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") "
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.259467 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") pod \"12f068aa-ed0a-47e7-9f95-16f86bf91343\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") "
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.260075 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.260115 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.260450 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12f068aa-ed0a-47e7-9f95-16f86bf91343" (UID: "12f068aa-ed0a-47e7-9f95-16f86bf91343"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.263809 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj" (OuterVolumeSpecName: "kube-api-access-6lshj") pod "12f068aa-ed0a-47e7-9f95-16f86bf91343" (UID: "12f068aa-ed0a-47e7-9f95-16f86bf91343"). InnerVolumeSpecName "kube-api-access-6lshj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.280139 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" path="/var/lib/kubelet/pods/0d2b3a75-cb2e-41a2-9005-a72a8aebb818/volumes"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.281542 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" path="/var/lib/kubelet/pods/dadb283d-7f9f-414c-9017-f8c0875878ad/volumes"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.362769 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.363234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4"
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.364566 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.364599 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.386458 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4"
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.405615 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.410370 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.603900 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" containerID="609eafe7485b15327ad2db6af8fea1da5eeeb224da5b54e1005034d41800fc19" exitCode=0 Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.604461 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qd5p" event={"ID":"8c2755f3-fac4-4f0b-9afb-a449f1587d11","Type":"ContainerDied","Data":"609eafe7485b15327ad2db6af8fea1da5eeeb224da5b54e1005034d41800fc19"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.615153 4985 generic.go:334] "Generic (PLEG): container finished" podID="1a24a5c2-4c45-43dd-a957-253323fed4d5" containerID="b5b1a4710b8858945982e3f5911ca4fd86e8a7dae739eb3659e4c396927b6955" exitCode=0 Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.615218 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z2jgs" event={"ID":"1a24a5c2-4c45-43dd-a957-253323fed4d5","Type":"ContainerDied","Data":"b5b1a4710b8858945982e3f5911ca4fd86e8a7dae739eb3659e4c396927b6955"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.619286 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fm4x7" event={"ID":"12f068aa-ed0a-47e7-9f95-16f86bf91343","Type":"ContainerDied","Data":"8bd64f391002afc6ed3d23bed80d044acc414be4bab0351a66dfcef4e0f3f74c"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.619349 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bd64f391002afc6ed3d23bed80d044acc414be4bab0351a66dfcef4e0f3f74c" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.619446 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.626892 4985 generic.go:334] "Generic (PLEG): container finished" podID="346cb311-0387-4c85-9827-e0091b1e6bcd" containerID="521672f13c59cc25ffac94ddae42298d333bbe43930229a9ebba2d7ae20a8b6d" exitCode=0 Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.627132 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3e6a-account-create-update-ktg62" event={"ID":"346cb311-0387-4c85-9827-e0091b1e6bcd","Type":"ContainerDied","Data":"521672f13c59cc25ffac94ddae42298d333bbe43930229a9ebba2d7ae20a8b6d"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.634691 4985 generic.go:334] "Generic (PLEG): container finished" podID="4adf60c6-4008-4f41-a60b-cf10db1657cf" containerID="7b723368d435c52066b70f7b63bb7ce17848129ed979021f777f40ce02cde0ea" exitCode=0 Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.634756 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd1-account-create-update-tlhk7" event={"ID":"4adf60c6-4008-4f41-a60b-cf10db1657cf","Type":"ContainerDied","Data":"7b723368d435c52066b70f7b63bb7ce17848129ed979021f777f40ce02cde0ea"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.872811 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"] Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.228423 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"] Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.383610 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.389974 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.494177 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") pod \"e6004532-b8ab-4b69-9907-e7bd26c6735a\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.494316 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") pod \"9900c5fe-8fec-452e-86cc-98d901c94329\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.494535 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") pod \"9900c5fe-8fec-452e-86cc-98d901c94329\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.494683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") pod \"e6004532-b8ab-4b69-9907-e7bd26c6735a\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.497234 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9900c5fe-8fec-452e-86cc-98d901c94329" (UID: "9900c5fe-8fec-452e-86cc-98d901c94329"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.497910 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6004532-b8ab-4b69-9907-e7bd26c6735a" (UID: "e6004532-b8ab-4b69-9907-e7bd26c6735a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.517524 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc" (OuterVolumeSpecName: "kube-api-access-7rwlc") pod "e6004532-b8ab-4b69-9907-e7bd26c6735a" (UID: "e6004532-b8ab-4b69-9907-e7bd26c6735a"). InnerVolumeSpecName "kube-api-access-7rwlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.517613 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5" (OuterVolumeSpecName: "kube-api-access-jncg5") pod "9900c5fe-8fec-452e-86cc-98d901c94329" (UID: "9900c5fe-8fec-452e-86cc-98d901c94329"). InnerVolumeSpecName "kube-api-access-jncg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.599939 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.599986 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.600000 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.600012 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.670907 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1abf-account-create-update-fwwhm" event={"ID":"e6004532-b8ab-4b69-9907-e7bd26c6735a","Type":"ContainerDied","Data":"f24ff43e9c1efa3a7fc1289bc1ab6b77ffa3e1a45be1121c6dcc1ee3c4ef0fb9"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.670969 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f24ff43e9c1efa3a7fc1289bc1ab6b77ffa3e1a45be1121c6dcc1ee3c4ef0fb9" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.671072 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.682633 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ksczb" event={"ID":"9900c5fe-8fec-452e-86cc-98d901c94329","Type":"ContainerDied","Data":"27094ed44a1a823e00c87afc7c6b6780c4e13b4f03410388f06fe7b875da5910"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.682674 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27094ed44a1a823e00c87afc7c6b6780c4e13b4f03410388f06fe7b875da5910" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.682741 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.686372 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" event={"ID":"dbefdfab-0ef2-4f71-9e9c-412c4dd87886","Type":"ContainerStarted","Data":"cecab7e544d7d4e5d190c44116d919bb9260ba70670cc5c4245efeb8c2adb050"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.686607 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" event={"ID":"dbefdfab-0ef2-4f71-9e9c-412c4dd87886","Type":"ContainerStarted","Data":"9e2efe46034044851f5a3e637e431cf9ea43affccfac6f4e797b1d360ae90de8"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.695473 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" event={"ID":"9193a306-03fe-41ae-8b93-2851b08c73fb","Type":"ContainerStarted","Data":"dac80678a434994386297bfe622d70833a87d9d21510a5da7f0de00c71f32e28"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.695528 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" event={"ID":"9193a306-03fe-41ae-8b93-2851b08c73fb","Type":"ContainerStarted","Data":"bbbe3861e112c80337ea958edc9df2015e30e5d8f56b8fda15972e6b8bc59e33"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.713325 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" podStartSLOduration=1.713294688 podStartE2EDuration="1.713294688s" podCreationTimestamp="2026-01-28 18:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:56.705392405 +0000 UTC m=+1367.531955226" watchObservedRunningTime="2026-01-28 18:35:56.713294688 +0000 UTC m=+1367.539857519" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.749737 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" podStartSLOduration=2.749704716 podStartE2EDuration="2.749704716s" podCreationTimestamp="2026-01-28 18:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:56.735536776 +0000 UTC m=+1367.562099597" watchObservedRunningTime="2026-01-28 18:35:56.749704716 +0000 UTC m=+1367.576267527" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.197267 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.288049 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") pod \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.288953 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") pod \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.289916 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c2755f3-fac4-4f0b-9afb-a449f1587d11" (UID: "8c2755f3-fac4-4f0b-9afb-a449f1587d11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.294807 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.319763 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7" (OuterVolumeSpecName: "kube-api-access-797f7") pod "8c2755f3-fac4-4f0b-9afb-a449f1587d11" (UID: "8c2755f3-fac4-4f0b-9afb-a449f1587d11"). InnerVolumeSpecName "kube-api-access-797f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.396349 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.608274 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.618021 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.631346 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.631346 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z2jgs"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.704603 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") pod \"1a24a5c2-4c45-43dd-a957-253323fed4d5\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705032 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a24a5c2-4c45-43dd-a957-253323fed4d5" (UID: "1a24a5c2-4c45-43dd-a957-253323fed4d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705109 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") pod \"1a24a5c2-4c45-43dd-a957-253323fed4d5\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705613 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") pod \"4adf60c6-4008-4f41-a60b-cf10db1657cf\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705691 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") pod \"346cb311-0387-4c85-9827-e0091b1e6bcd\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705719 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") pod \"346cb311-0387-4c85-9827-e0091b1e6bcd\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705762 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") pod \"4adf60c6-4008-4f41-a60b-cf10db1657cf\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.706381 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "346cb311-0387-4c85-9827-e0091b1e6bcd" (UID: "346cb311-0387-4c85-9827-e0091b1e6bcd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.706784 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4adf60c6-4008-4f41-a60b-cf10db1657cf" (UID: "4adf60c6-4008-4f41-a60b-cf10db1657cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.707062 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.707085 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.707095 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.708671 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z2jgs" event={"ID":"1a24a5c2-4c45-43dd-a957-253323fed4d5","Type":"ContainerDied","Data":"6d9b1c199f1062535f568d8f45dde873fe42b5b81b0f1392ff76e0211f842360"}
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.708712 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d9b1c199f1062535f568d8f45dde873fe42b5b81b0f1392ff76e0211f842360"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.708771 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z2jgs"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.708861 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4" (OuterVolumeSpecName: "kube-api-access-ljjz4") pod "4adf60c6-4008-4f41-a60b-cf10db1657cf" (UID: "4adf60c6-4008-4f41-a60b-cf10db1657cf"). InnerVolumeSpecName "kube-api-access-ljjz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.709779 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb" (OuterVolumeSpecName: "kube-api-access-2s5bb") pod "346cb311-0387-4c85-9827-e0091b1e6bcd" (UID: "346cb311-0387-4c85-9827-e0091b1e6bcd"). InnerVolumeSpecName "kube-api-access-2s5bb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.709819 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz" (OuterVolumeSpecName: "kube-api-access-7cbkz") pod "1a24a5c2-4c45-43dd-a957-253323fed4d5" (UID: "1a24a5c2-4c45-43dd-a957-253323fed4d5"). InnerVolumeSpecName "kube-api-access-7cbkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.712013 4985 generic.go:334] "Generic (PLEG): container finished" podID="c0714595-ac9e-4945-9250-6f499317070d" containerID="00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc" exitCode=0
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.712053 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6lq9x" event={"ID":"c0714595-ac9e-4945-9250-6f499317070d","Type":"ContainerDied","Data":"00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc"}
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.712360 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6lq9x"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.722583 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3e6a-account-create-update-ktg62" event={"ID":"346cb311-0387-4c85-9827-e0091b1e6bcd","Type":"ContainerDied","Data":"bb09edc01a4c3afb4449a4dacb7ab86a9a7a6e0d155a46be22553034c547ae03"}
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.722624 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb09edc01a4c3afb4449a4dacb7ab86a9a7a6e0d155a46be22553034c547ae03"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.722671 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3e6a-account-create-update-ktg62"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.727147 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd1-account-create-update-tlhk7" event={"ID":"4adf60c6-4008-4f41-a60b-cf10db1657cf","Type":"ContainerDied","Data":"2b9e72b871ae9726c48909179e5d8e9383458a61e82e6086b4c9d2eaeaa79c60"}
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.727196 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9e72b871ae9726c48909179e5d8e9383458a61e82e6086b4c9d2eaeaa79c60"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.727283 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fd1-account-create-update-tlhk7"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.741080 4985 generic.go:334] "Generic (PLEG): container finished" podID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" containerID="cecab7e544d7d4e5d190c44116d919bb9260ba70670cc5c4245efeb8c2adb050" exitCode=0
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.741134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" event={"ID":"dbefdfab-0ef2-4f71-9e9c-412c4dd87886","Type":"ContainerDied","Data":"cecab7e544d7d4e5d190c44116d919bb9260ba70670cc5c4245efeb8c2adb050"}
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.745706 4985 generic.go:334] "Generic (PLEG): container finished" podID="9193a306-03fe-41ae-8b93-2851b08c73fb" containerID="dac80678a434994386297bfe622d70833a87d9d21510a5da7f0de00c71f32e28" exitCode=0
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.745768 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" event={"ID":"9193a306-03fe-41ae-8b93-2851b08c73fb","Type":"ContainerDied","Data":"dac80678a434994386297bfe622d70833a87d9d21510a5da7f0de00c71f32e28"}
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.750347 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qd5p" event={"ID":"8c2755f3-fac4-4f0b-9afb-a449f1587d11","Type":"ContainerDied","Data":"189015c56b26a2946bc608b7b573f5ccb4f5e157b8c0ad9b525476261a7b20ac"}
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.750396 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189015c56b26a2946bc608b7b573f5ccb4f5e157b8c0ad9b525476261a7b20ac"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.750459 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9qd5p"
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.751766 4985 generic.go:334] "Generic (PLEG): container finished" podID="75109476-5e36-45b8-afb9-1e7f3a9331f9" containerID="d9984694685d646182db409a296c9eb34220178e5fa3648431bc4bdbe12a9c45" exitCode=0
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.751799 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l4q82" event={"ID":"75109476-5e36-45b8-afb9-1e7f3a9331f9","Type":"ContainerDied","Data":"d9984694685d646182db409a296c9eb34220178e5fa3648431bc4bdbe12a9c45"}
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.807844 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.808427 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hdhf\" (UniqueName: \"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.808572 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.808669 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.808809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.809809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.810055 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") "
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.810668 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811088 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811413 4985 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811505 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811577 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811651 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.822463 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf" (OuterVolumeSpecName: "kube-api-access-9hdhf") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "kube-api-access-9hdhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.824509 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.830599 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts" (OuterVolumeSpecName: "scripts") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
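
Every volume of the swift-ring-rebalance pod being cleaned up here walks the same three steps: reconciler_common.go:159 starts the unmount, operation_generator.go:803 reports TearDown succeeded, and reconciler_common.go:293 finally reports the volume detached with an empty DevicePath. A toy model of that per-volume ordering, simplified from what the messages themselves show rather than from kubelet's reconciler code:

```go
package main

import "fmt"

// Teardown phases in the order they appear per volume in this log.
type phase int

const (
	unmountStarted phase = iota // "operationExecutor.UnmountVolume started"
	tearDownDone                // "UnmountVolume.TearDown succeeded"
	detached                    // "Volume detached ... DevicePath \"\""
)

// advance refuses to skip a phase; the log shows this strict per-volume
// ordering even though different volumes interleave freely.
func advance(cur, next phase) (phase, error) {
	if next != cur+1 {
		return cur, fmt.Errorf("illegal transition %d -> %d", cur, next)
	}
	return next, nil
}

func main() {
	// Volumes torn down for swift-ring-rebalance-6lq9x in the entries above.
	vols := []string{"ring-data-devices", "etc-swift", "kube-api-access-9hdhf",
		"dispersionconf", "scripts", "combined-ca-bundle", "swiftconf"}
	for _, vol := range vols {
		p := unmountStarted
		var err error
		for _, next := range []phase{tearDownDone, detached} {
			if p, err = advance(p, next); err != nil {
				panic(err)
			}
		}
		fmt.Println(vol, "detached")
	}
}
```
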
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.833163 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.839582 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916130 4985 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916439 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hdhf\" (UniqueName: \"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916527 4985 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916602 4985 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916676 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916766 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.766457 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6lq9x"
Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.767749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6lq9x" event={"ID":"c0714595-ac9e-4945-9250-6f499317070d","Type":"ContainerDied","Data":"8984873f7fbeb5534245e789d9a64682aba9641126cebac96c088a070c8c95bb"}
Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.767905 4985 scope.go:117] "RemoveContainer" containerID="00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc"
Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.768082 4985 generic.go:334] "Generic (PLEG): container finished" podID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517" exitCode=0
Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.768200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerDied","Data":"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517"}
Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.906337 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"]
Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.914357 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"]
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.271575 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.288419 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0714595-ac9e-4945-9250-6f499317070d" path="/var/lib/kubelet/pods/c0714595-ac9e-4945-9250-6f499317070d/volumes"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.349742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") pod \"9193a306-03fe-41ae-8b93-2851b08c73fb\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.350064 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") pod \"9193a306-03fe-41ae-8b93-2851b08c73fb\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.351525 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9193a306-03fe-41ae-8b93-2851b08c73fb" (UID: "9193a306-03fe-41ae-8b93-2851b08c73fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.360198 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct" (OuterVolumeSpecName: "kube-api-access-8fgct") pod "9193a306-03fe-41ae-8b93-2851b08c73fb" (UID: "9193a306-03fe-41ae-8b93-2851b08c73fb"). InnerVolumeSpecName "kube-api-access-8fgct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.429855 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.437403 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l4q82"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.451954 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452334 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452397 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") pod \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452430 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbrps\" (UniqueName: \"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452468 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452535 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452560 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452654 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") pod \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") "
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452833 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.453430 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.453449 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.453461 4985 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.483548 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts" (OuterVolumeSpecName: "scripts") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.483977 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbefdfab-0ef2-4f71-9e9c-412c4dd87886" (UID: "dbefdfab-0ef2-4f71-9e9c-412c4dd87886"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.484545 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.485317 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps" (OuterVolumeSpecName: "kube-api-access-rbrps") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "kube-api-access-rbrps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.485702 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.485827 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p" (OuterVolumeSpecName: "kube-api-access-whr5p") pod "dbefdfab-0ef2-4f71-9e9c-412c4dd87886" (UID: "dbefdfab-0ef2-4f71-9e9c-412c4dd87886"). InnerVolumeSpecName "kube-api-access-whr5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.517503 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.519918 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555269 4985 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555296 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555308 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbrps\" (UniqueName: \"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555317 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555326 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555334 4985 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555342 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555351 4985 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.645321 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fm4x7"]
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.658832 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fm4x7"]
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.736658 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9sg6w"]
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739311 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9193a306-03fe-41ae-8b93-2851b08c73fb" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739356 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9193a306-03fe-41ae-8b93-2851b08c73fb" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739394 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0714595-ac9e-4945-9250-6f499317070d" containerName="swift-ring-rebalance"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739402 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0714595-ac9e-4945-9250-6f499317070d" containerName="swift-ring-rebalance"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739419 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a24a5c2-4c45-43dd-a957-253323fed4d5" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739426 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a24a5c2-4c45-43dd-a957-253323fed4d5" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739453 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75109476-5e36-45b8-afb9-1e7f3a9331f9" containerName="swift-ring-rebalance"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739459 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="75109476-5e36-45b8-afb9-1e7f3a9331f9" containerName="swift-ring-rebalance"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739476 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f068aa-ed0a-47e7-9f95-16f86bf91343" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739482 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f068aa-ed0a-47e7-9f95-16f86bf91343" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739504 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9900c5fe-8fec-452e-86cc-98d901c94329" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739511 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9900c5fe-8fec-452e-86cc-98d901c94329" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739527 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739533 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" containerName="mariadb-account-create-update"
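
The cpu_manager/state_mem burst above, and the memory_manager entries that follow, are kubelet reclaiming per-container resource-manager state while admitting the new root-account-create-update-9sg6w pod: RemoveStaleState drops CPUSet and memory assignments keyed by podUID and containerName for pods that no longer exist. A rough sketch of that bookkeeping, our own simplification rather than the kubelet implementation:

```go
package main

import "fmt"

// key mirrors how these log entries identify stale state: podUID plus
// containerName.
type key struct{ podUID, containerName string }

// staleStateCache is a toy stand-in for the CPU manager's state map; the
// value plays the role of an assigned CPUSet.
type staleStateCache map[key]string

// removeStaleState drops every entry whose pod is no longer active, mimicking
// the paired "RemoveStaleState: removing container" / "Deleted CPUSet
// assignment" messages above.
func (c staleStateCache) removeStaleState(active map[string]bool) {
	for k := range c {
		if !active[k.podUID] {
			fmt.Printf("removing container podUID=%q containerName=%q\n", k.podUID, k.containerName)
			delete(c, k)
		}
	}
}

func main() {
	cache := staleStateCache{
		{"9193a306-03fe-41ae-8b93-2851b08c73fb", "mariadb-database-create"}: "0-3",
		{"c0714595-ac9e-4945-9250-6f499317070d", "swift-ring-rebalance"}:    "0-3",
	}
	// Neither pod is active any more; both were deleted earlier in this log.
	cache.removeStaleState(map[string]bool{})
	fmt.Println("entries left:", len(cache))
}
```
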
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739547 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346cb311-0387-4c85-9827-e0091b1e6bcd" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739553 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="346cb311-0387-4c85-9827-e0091b1e6bcd" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739571 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739576 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739590 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4adf60c6-4008-4f41-a60b-cf10db1657cf" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739596 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4adf60c6-4008-4f41-a60b-cf10db1657cf" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739610 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6004532-b8ab-4b69-9907-e7bd26c6735a" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739618 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6004532-b8ab-4b69-9907-e7bd26c6735a" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740114 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740138 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f068aa-ed0a-47e7-9f95-16f86bf91343" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740156 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="75109476-5e36-45b8-afb9-1e7f3a9331f9" containerName="swift-ring-rebalance"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740173 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a24a5c2-4c45-43dd-a957-253323fed4d5" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740181 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="346cb311-0387-4c85-9827-e0091b1e6bcd" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740203 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9193a306-03fe-41ae-8b93-2851b08c73fb" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740223 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0714595-ac9e-4945-9250-6f499317070d" containerName="swift-ring-rebalance"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740236 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4adf60c6-4008-4f41-a60b-cf10db1657cf" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740265 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9900c5fe-8fec-452e-86cc-98d901c94329" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740278 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6004532-b8ab-4b69-9907-e7bd26c6735a" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740295 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.741551 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.747329 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.760133 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.760186 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.783180 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9sg6w"]
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.833390 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.833389 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" event={"ID":"dbefdfab-0ef2-4f71-9e9c-412c4dd87886","Type":"ContainerDied","Data":"9e2efe46034044851f5a3e637e431cf9ea43affccfac6f4e797b1d360ae90de8"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.833507 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2efe46034044851f5a3e637e431cf9ea43affccfac6f4e797b1d360ae90de8"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.836919 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l4q82"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.837460 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l4q82" event={"ID":"75109476-5e36-45b8-afb9-1e7f3a9331f9","Type":"ContainerDied","Data":"c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.837526 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.848279 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerStarted","Data":"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.848568 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.852420 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" event={"ID":"9193a306-03fe-41ae-8b93-2851b08c73fb","Type":"ContainerDied","Data":"bbbe3861e112c80337ea958edc9df2015e30e5d8f56b8fda15972e6b8bc59e33"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.852465 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbbe3861e112c80337ea958edc9df2015e30e5d8f56b8fda15972e6b8bc59e33"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.852475 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.862966 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.863022 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.864816 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.866541 4985 generic.go:334] "Generic (PLEG): container finished" podID="313d3857-140a-4a66-8329-12453fc8dd4c" containerID="4546478e3b48ee65a1e4f5b248d4caed2739a0baae4f2cf1c67d5da021b79ce7" exitCode=0
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.866594 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerDied","Data":"4546478e3b48ee65a1e4f5b248d4caed2739a0baae4f2cf1c67d5da021b79ce7"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.880106 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.36363615 podStartE2EDuration="1m1.880083703s" podCreationTimestamp="2026-01-28 18:34:58 +0000 UTC" firstStartedPulling="2026-01-28 18:35:00.410925482 +0000 UTC m=+1311.237488303" lastFinishedPulling="2026-01-28 18:35:23.927373035 +0000 UTC m=+1334.753935856" observedRunningTime="2026-01-28 18:35:59.874627749 +0000 UTC m=+1370.701190570" watchObservedRunningTime="2026-01-28 18:35:59.880083703 +0000 UTC m=+1370.706646524"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.882757 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.923620 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.402749 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9sg6w"]
Jan 28 18:36:00 crc kubenswrapper[4985]: W0128 18:36:00.405697 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdbd403f_b5d7_4aba_9ee6_bcbbd933b212.slice/crio-82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00 WatchSource:0}: Error finding container 82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00: Status 404 returned error can't find the container with id 82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.881656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerStarted","Data":"40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.882974 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.887332 4985 generic.go:334] "Generic (PLEG): container finished" podID="9549037f-5867-44ac-86dc-a02105e4c414" containerID="bb84d317406cd6ce8331d52ba3971c969e272858edb60fe48bf5c6408f6194f8" exitCode=0
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.887427 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerDied","Data":"bb84d317406cd6ce8331d52ba3971c969e272858edb60fe48bf5c6408f6194f8"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.889755 4985 generic.go:334] "Generic (PLEG): container finished" podID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" containerID="448c9182ae2c3757a2a9e99f29042394c97a623fe1975f8bf4c1b669c7542ca8" exitCode=0
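
For post-mortem analysis of an artifact like this one, the PLEG lines are the easiest to mine: each carries a pod name, an event type, and a container or sandbox ID. A small hypothetical extractor (the regex and program are ours, tailored to these messages rather than a general kubelet parser) that tallies event types per pod:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// plegRe matches the "SyncLoop (PLEG): event for pod" lines seen in this
// log, capturing the pod, the event type, and the container/sandbox ID.
var plegRe = regexp.MustCompile(`SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event={"ID":"[^"]+","Type":"([^"]+)","Data":"([^"]+)"}`)

func main() {
	counts := map[string]map[string]int{} // pod -> event type -> count
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be long
	for sc.Scan() {
		if m := plegRe.FindStringSubmatch(sc.Text()); m != nil {
			pod, typ := m[1], m[2]
			if counts[pod] == nil {
				counts[pod] = map[string]int{}
			}
			counts[pod][typ]++
		}
	}
	for pod, byType := range counts {
		fmt.Println(pod, byType)
	}
}
```

Fed this log via something like "gunzip -c kubelet.log.gz | go run tally.go", it would report, for instance, two ContainerStarted events (container plus sandbox) followed by two ContainerDied events for mysqld-exporter-53b2-account-create-update-qhkg4.
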
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.889836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9sg6w" event={"ID":"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212","Type":"ContainerDied","Data":"448c9182ae2c3757a2a9e99f29042394c97a623fe1975f8bf4c1b669c7542ca8"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.889873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9sg6w" event={"ID":"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212","Type":"ContainerStarted","Data":"82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.891650 4985 generic.go:334] "Generic (PLEG): container finished" podID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerID="dfcb150ccda2aa4d1050a6d900540fe9f90c22d4f5256e19b6eeee11fa6e624a" exitCode=0
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.891779 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerDied","Data":"dfcb150ccda2aa4d1050a6d900540fe9f90c22d4f5256e19b6eeee11fa6e624a"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.908097 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=39.053814145 podStartE2EDuration="1m2.908072935s" podCreationTimestamp="2026-01-28 18:34:58 +0000 UTC" firstStartedPulling="2026-01-28 18:35:00.668561816 +0000 UTC m=+1311.495124637" lastFinishedPulling="2026-01-28 18:35:24.522820606 +0000 UTC m=+1335.349383427" observedRunningTime="2026-01-28 18:36:00.906682606 +0000 UTC m=+1371.733245447" watchObservedRunningTime="2026-01-28 18:36:00.908072935 +0000 UTC m=+1371.734635756"
Jan 28 18:36:01 crc kubenswrapper[4985]: I0128 18:36:01.274943 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f068aa-ed0a-47e7-9f95-16f86bf91343" path="/var/lib/kubelet/pods/12f068aa-ed0a-47e7-9f95-16f86bf91343/volumes"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.228496 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5q5qm"]
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.231701 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.235107 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.235306 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jbtcd"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.244916 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5q5qm"]
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.351829 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.351982 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.352041 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.352284 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.454698 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.455123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.455223 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.455312 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.458853 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.459985 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.460124 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.472165 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.587804 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.598946 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.658556 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") pod \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") "
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.659013 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") pod \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") "
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.661782 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" (UID: "cdbd403f-b5d7-4aba-9ee6-bcbbd933b212"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
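
The mount side for glance-db-sync-5q5qm above mirrors the teardown sequence in reverse: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded", all within milliseconds. Measuring such gaps means parsing the klog header ("I0128 18:36:03.455312"), which omits the year; a small helper of our own, with the year taken from the structured timestamps embedded elsewhere in these entries:

```go
package main

import (
	"fmt"
	"time"
)

// parseKlog parses klog's "mmdd hh:mm:ss.uuuuuu" stamp (the part after the
// severity letter). klog headers carry no year, so the caller supplies one.
func parseKlog(stamp string, year int) (time.Time, error) {
	t, err := time.Parse("0102 15:04:05.000000", stamp)
	if err != nil {
		return time.Time{}, err
	}
	return t.AddDate(year, 0, 0), nil
}

func main() {
	// config-data for glance-db-sync-5q5qm: "MountVolume started" at
	// 18:36:03.455312, "MountVolume.SetUp succeeded" at 18:36:03.460124.
	started, err := parseKlog("0128 18:36:03.455312", 2026)
	if err != nil {
		panic(err)
	}
	succeeded, err := parseKlog("0128 18:36:03.460124", 2026)
	if err != nil {
		panic(err)
	}
	fmt.Println(succeeded.Sub(started)) // 4.812ms
}
```
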
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.761822 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.761851 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.920841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerStarted","Data":"1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25"} Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.921195 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.924280 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9sg6w" event={"ID":"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212","Type":"ContainerDied","Data":"82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00"} Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.924334 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00" Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.924462 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sg6w" Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.926438 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerStarted","Data":"aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c"} Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.927296 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.956003 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=42.843467075 podStartE2EDuration="1m5.955984535s" podCreationTimestamp="2026-01-28 18:34:58 +0000 UTC" firstStartedPulling="2026-01-28 18:35:00.622929407 +0000 UTC m=+1311.449492228" lastFinishedPulling="2026-01-28 18:35:23.735446867 +0000 UTC m=+1334.562009688" observedRunningTime="2026-01-28 18:36:03.949192213 +0000 UTC m=+1374.775755034" watchObservedRunningTime="2026-01-28 18:36:03.955984535 +0000 UTC m=+1374.782547356" Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.985148 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.365691017 podStartE2EDuration="1m5.985116617s" podCreationTimestamp="2026-01-28 18:34:58 +0000 UTC" firstStartedPulling="2026-01-28 18:35:00.816661137 +0000 UTC m=+1311.643223948" lastFinishedPulling="2026-01-28 18:35:24.436086717 +0000 UTC m=+1335.262649548" observedRunningTime="2026-01-28 18:36:03.973555621 +0000 UTC m=+1374.800118442" watchObservedRunningTime="2026-01-28 18:36:03.985116617 +0000 UTC m=+1374.811679438" Jan 28 18:36:05 crc 
kubenswrapper[4985]: I0128 18:36:05.299834 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"] Jan 28 18:36:05 crc kubenswrapper[4985]: E0128 18:36:05.300721 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" containerName="mariadb-account-create-update" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.300740 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" containerName="mariadb-account-create-update" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.300994 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" containerName="mariadb-account-create-update" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.306463 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.323732 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"] Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.414052 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.414334 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.428775 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"] Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.430510 4985 util.go:30] "No sandbox for pod can be found. 
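
The RemoveStaleState and "Deleted CPUSet assignment" lines fire when a new pod is admitted and the CPU and memory managers find accounting entries for containers of pods that no longer exist (here the finished mariadb-account-create-update container). They are logged at error level but are routine cleanup. A toy model of that bookkeeping, assumptions only, not kubelet source:

    // Toy model: drop per-container resource assignments for pods that are gone.
    package main

    import "fmt"

    // podUID -> containerName -> assigned cpuset (illustrative)
    type assignments map[string]map[string]string

    func removeStaleState(a assignments, active map[string]bool) {
        for podUID, containers := range a {
            if active[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
            }
            delete(a, podUID) // deleting during range is safe in Go
        }
    }

    func main() {
        a := assignments{"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212": {"mariadb-account-create-update": "0-3"}}
        removeStaleState(a, map[string]bool{}) // the account-create pod has been deleted
    }
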
Need to start a new one" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.433440 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.442360 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"] Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.517641 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.517800 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.517862 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.518128 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.520334 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.543145 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.619903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:05 crc 
kubenswrapper[4985]: I0128 18:36:05.620018 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.620665 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.627012 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.644956 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.946768 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.947300 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f"} Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.032434 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5q5qm"] Jan 28 18:36:06 crc kubenswrapper[4985]: W0128 18:36:06.037398 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b9159_df89_4859_b5f3_d34b2759d0fd.slice/crio-08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09 WatchSource:0}: Error finding container 08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09: Status 404 returned error can't find the container with id 08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09 Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.193890 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"] Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.249058 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.552651 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"] Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.960778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" 
event={"ID":"53f6fb79-54ff-4a24-ad53-5812b6faa504","Type":"ContainerStarted","Data":"1f111c090d549d68eb9c893a3868b82edfed972f352a2924277825559a933734"} Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.961038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" event={"ID":"53f6fb79-54ff-4a24-ad53-5812b6faa504","Type":"ContainerStarted","Data":"56df849bc6eab86fbb2f1c43e6b3abacfd8cf4d3de99598c0ea16866523869b5"} Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.962466 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5q5qm" event={"ID":"229b9159-df89-4859-b5f3-d34b2759d0fd","Type":"ContainerStarted","Data":"08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09"} Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.968782 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" containerID="b2b6ff931f4d8121ddd40be80d57520170cc490944b52533c2717e3ed1e070dd" exitCode=0 Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.968813 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" event={"ID":"8c57cd6d-54d8-4d7c-863c-cfd30fab0768","Type":"ContainerDied","Data":"b2b6ff931f4d8121ddd40be80d57520170cc490944b52533c2717e3ed1e070dd"} Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.968832 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" event={"ID":"8c57cd6d-54d8-4d7c-863c-cfd30fab0768","Type":"ContainerStarted","Data":"671eee055a071bc8d961556a69fcdf932528e6509edb02442e024c4f35917d09"} Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.986107 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" podStartSLOduration=1.986086521 podStartE2EDuration="1.986086521s" podCreationTimestamp="2026-01-28 18:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:06.978078535 +0000 UTC m=+1377.804641376" watchObservedRunningTime="2026-01-28 18:36:06.986086521 +0000 UTC m=+1377.812649342" Jan 28 18:36:07 crc kubenswrapper[4985]: I0128 18:36:07.982028 4985 generic.go:334] "Generic (PLEG): container finished" podID="53f6fb79-54ff-4a24-ad53-5812b6faa504" containerID="1f111c090d549d68eb9c893a3868b82edfed972f352a2924277825559a933734" exitCode=0 Jan 28 18:36:07 crc kubenswrapper[4985]: I0128 18:36:07.982128 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" event={"ID":"53f6fb79-54ff-4a24-ad53-5812b6faa504","Type":"ContainerDied","Data":"1f111c090d549d68eb9c893a3868b82edfed972f352a2924277825559a933734"} Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.401182 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.493370 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") pod \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.493883 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c57cd6d-54d8-4d7c-863c-cfd30fab0768" (UID: "8c57cd6d-54d8-4d7c-863c-cfd30fab0768"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.494313 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") pod \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.495873 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.501030 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds" (OuterVolumeSpecName: "kube-api-access-n8qds") pod "8c57cd6d-54d8-4d7c-863c-cfd30fab0768" (UID: "8c57cd6d-54d8-4d7c-863c-cfd30fab0768"). InnerVolumeSpecName "kube-api-access-n8qds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.597596 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.995673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd"} Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.998550 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" event={"ID":"8c57cd6d-54d8-4d7c-863c-cfd30fab0768","Type":"ContainerDied","Data":"671eee055a071bc8d961556a69fcdf932528e6509edb02442e024c4f35917d09"} Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.998576 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="671eee055a071bc8d961556a69fcdf932528e6509edb02442e024c4f35917d09" Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.998583 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.209907 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.215863 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.411510 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.508957 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.619274 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") pod \"53f6fb79-54ff-4a24-ad53-5812b6faa504\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.619475 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") pod \"53f6fb79-54ff-4a24-ad53-5812b6faa504\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.620078 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53f6fb79-54ff-4a24-ad53-5812b6faa504" (UID: "53f6fb79-54ff-4a24-ad53-5812b6faa504"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.626666 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx" (OuterVolumeSpecName: "kube-api-access-cwpxx") pod "53f6fb79-54ff-4a24-ad53-5812b6faa504" (UID: "53f6fb79-54ff-4a24-ad53-5812b6faa504"). InnerVolumeSpecName "kube-api-access-cwpxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.721453 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.721496 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.836101 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.009850 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" event={"ID":"53f6fb79-54ff-4a24-ad53-5812b6faa504","Type":"ContainerDied","Data":"56df849bc6eab86fbb2f1c43e6b3abacfd8cf4d3de99598c0ea16866523869b5"} Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.009890 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56df849bc6eab86fbb2f1c43e6b3abacfd8cf4d3de99598c0ea16866523869b5" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.009932 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.033711 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.654281 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:36:10 crc kubenswrapper[4985]: E0128 18:36:10.655532 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" containerName="mariadb-database-create" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.655568 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" containerName="mariadb-database-create" Jan 28 18:36:10 crc kubenswrapper[4985]: E0128 18:36:10.655630 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f6fb79-54ff-4a24-ad53-5812b6faa504" containerName="mariadb-account-create-update" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.655639 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f6fb79-54ff-4a24-ad53-5812b6faa504" containerName="mariadb-account-create-update" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.656182 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" containerName="mariadb-database-create" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.656217 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f6fb79-54ff-4a24-ad53-5812b6faa504" containerName="mariadb-account-create-update" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.673458 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.676835 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.708027 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.769156 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.769303 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8sjf\" (UniqueName: \"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.769340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.871985 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8sjf\" (UniqueName: \"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.872068 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.872214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.879237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.894710 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0" Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.896530 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8sjf\" (UniqueName: 
\"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0" Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.003738 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.021208 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"58e488e3d5fd637191d4b86c732b0fb14d5b332b19c89bed60cee07e1e816c5f"} Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.186213 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.186785 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.186869 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.187905 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.187965 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a" gracePeriod=600 Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.507370 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:36:11 crc kubenswrapper[4985]: W0128 18:36:11.519795 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod558a195a_5deb_441a_9eeb_9e506f49597e.slice/crio-85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583 WatchSource:0}: Error finding container 85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583: Status 404 returned error can't find the container with id 85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583 Jan 28 18:36:12 crc kubenswrapper[4985]: I0128 18:36:12.039710 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a" exitCode=0 Jan 28 18:36:12 crc kubenswrapper[4985]: I0128 18:36:12.039778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a"} Jan 28 18:36:12 crc kubenswrapper[4985]: I0128 18:36:12.039834 4985 scope.go:117] "RemoveContainer" containerID="68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093" Jan 28 18:36:12 crc kubenswrapper[4985]: I0128 18:36:12.041656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"558a195a-5deb-441a-9eeb-9e506f49597e","Type":"ContainerStarted","Data":"85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583"} Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.069005 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"} Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.263177 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9r84t" podUID="2d1c1ab5-7e43-47cd-8218-3d945574a79c" containerName="ovn-controller" probeResult="failure" output=< Jan 28 18:36:13 crc kubenswrapper[4985]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 18:36:13 crc kubenswrapper[4985]: > Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.330634 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.334906 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.575876 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"] Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.577296 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.579687 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.591661 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"] Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666026 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666118 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666170 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666828 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk44x\" (UniqueName: \"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770157 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770263 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk44x\" (UniqueName: 
\"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770398 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770451 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770484 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770536 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770605 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770611 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.772861 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.773156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.809994 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk44x\" (UniqueName: \"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.895779 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:18 crc kubenswrapper[4985]: I0128 18:36:18.218130 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9r84t" podUID="2d1c1ab5-7e43-47cd-8218-3d945574a79c" containerName="ovn-controller" probeResult="failure" output=< Jan 28 18:36:18 crc kubenswrapper[4985]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 18:36:18 crc kubenswrapper[4985]: > Jan 28 18:36:19 crc kubenswrapper[4985]: I0128 18:36:19.834866 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 28 18:36:19 crc kubenswrapper[4985]: I0128 18:36:19.863789 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 28 18:36:19 crc kubenswrapper[4985]: I0128 18:36:19.876313 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 28 18:36:19 crc kubenswrapper[4985]: I0128 18:36:19.980567 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:36:20 crc kubenswrapper[4985]: E0128 18:36:20.180953 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:20 crc kubenswrapper[4985]: E0128 18:36:20.182100 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:22 crc kubenswrapper[4985]: E0128 18:36:22.172854 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:44796->38.102.83.195:43365: write tcp 38.102.83.195:44796->38.102.83.195:43365: write: connection reset by peer Jan 28 18:36:23 crc kubenswrapper[4985]: I0128 18:36:23.230454 4985 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9r84t" podUID="2d1c1ab5-7e43-47cd-8218-3d945574a79c" containerName="ovn-controller" probeResult="failure" output=< Jan 28 18:36:23 crc kubenswrapper[4985]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 18:36:23 crc kubenswrapper[4985]: > Jan 28 18:36:24 crc kubenswrapper[4985]: E0128 18:36:24.748237 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Jan 28 18:36:24 crc kubenswrapper[4985]: E0128 18:36:24.748795 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info --prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gv7d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(96162e6f-966d-438d-9362-ef03abc4b277): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:36:24 crc kubenswrapper[4985]: E0128 18:36:24.750206 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" Jan 28 18:36:25 crc kubenswrapper[4985]: E0128 18:36:25.216262 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" Jan 28 18:36:25 crc kubenswrapper[4985]: I0128 18:36:25.713658 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"] Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.236975 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t-config-w57rc" event={"ID":"3aa41169-20ef-41dd-a534-929618c93ecf","Type":"ContainerStarted","Data":"00c5bac74e2813b5c78c4d3d883b158530767718be83285d64f4742a35e64806"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.238806 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t-config-w57rc" event={"ID":"3aa41169-20ef-41dd-a534-929618c93ecf","Type":"ContainerStarted","Data":"4304837e07a0d35b09132d3b8151c66561b180c75ce1afd3868d65e25580b626"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.240463 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"558a195a-5deb-441a-9eeb-9e506f49597e","Type":"ContainerStarted","Data":"fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.243878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"c3ec7d3fe0003c26958c7864faa954b76fb034fc6cf4e9cb82bb3285bbd8166b"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.244026 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"6b257804b520f072ee726aff4dbcbcf2026530dc7877d9752f22ff8244f8ff71"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.267240 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9r84t-config-w57rc" podStartSLOduration=13.267223453 podStartE2EDuration="13.267223453s" podCreationTimestamp="2026-01-28 18:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:26.259086853 +0000 UTC m=+1397.085649684" watchObservedRunningTime="2026-01-28 18:36:26.267223453 +0000 UTC m=+1397.093786284" Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.283779 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.008000221 podStartE2EDuration="16.283755949s" podCreationTimestamp="2026-01-28 18:36:10 +0000 UTC" firstStartedPulling="2026-01-28 18:36:11.522076763 +0000 UTC m=+1382.348639584" lastFinishedPulling="2026-01-28 18:36:25.797832491 +0000 UTC m=+1396.624395312" observedRunningTime="2026-01-28 18:36:26.276079573 +0000 UTC m=+1397.102642414" watchObservedRunningTime="2026-01-28 18:36:26.283755949 +0000 UTC m=+1397.110318770" Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.256130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5q5qm" event={"ID":"229b9159-df89-4859-b5f3-d34b2759d0fd","Type":"ContainerStarted","Data":"8d83ae610aea076db41903e479372673c489635bc359f8ba503ad92865568b4d"} Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.262912 4985 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"8dbad6fa2c438cc753b49e19a89b77bbaf282f34ff8f978e465f45a415960ca5"} Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.262940 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"04c96766cb4d8a87148edb5b1ddcfd2b3727e7bdb901b73bfa11bcf50a0f983d"} Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.268471 4985 generic.go:334] "Generic (PLEG): container finished" podID="3aa41169-20ef-41dd-a534-929618c93ecf" containerID="00c5bac74e2813b5c78c4d3d883b158530767718be83285d64f4742a35e64806" exitCode=0 Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.290408 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5q5qm" podStartSLOduration=4.660679958 podStartE2EDuration="24.290379789s" podCreationTimestamp="2026-01-28 18:36:03 +0000 UTC" firstStartedPulling="2026-01-28 18:36:06.041204216 +0000 UTC m=+1376.867767037" lastFinishedPulling="2026-01-28 18:36:25.670904047 +0000 UTC m=+1396.497466868" observedRunningTime="2026-01-28 18:36:27.280332245 +0000 UTC m=+1398.106895116" watchObservedRunningTime="2026-01-28 18:36:27.290379789 +0000 UTC m=+1398.116942620" Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.291073 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t-config-w57rc" event={"ID":"3aa41169-20ef-41dd-a534-929618c93ecf","Type":"ContainerDied","Data":"00c5bac74e2813b5c78c4d3d883b158530767718be83285d64f4742a35e64806"} Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.320674 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:27 crc kubenswrapper[4985]: E0128 18:36:27.323978 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.241422 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9r84t" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.285197 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"076d26c8df1b7770317a62e3822c0b7e7c64be3f432b53e1acb7682dcd2cceca"} Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.766964 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939005 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk44x\" (UniqueName: \"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939724 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939785 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939884 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939881 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939909 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939969 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run" (OuterVolumeSpecName: "var-run") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.940085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.940394 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941072 4985 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941270 4985 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941117 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941345 4985 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941697 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts" (OuterVolumeSpecName: "scripts") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.944935 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x" (OuterVolumeSpecName: "kube-api-access-fk44x") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "kube-api-access-fk44x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.045878 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk44x\" (UniqueName: \"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.046027 4985 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.046100 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.309795 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"1575f9da4f7494ff2e663abc8f87f3ad4b9b386bc83e6473f8c00a9cd27df0ea"} Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.310958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"fed3b390b9c40225e985f6c2393c1d7a2a36e9df0162c3b8c0adf2a9c7e328b7"} Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.311047 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"27712d7d1daf801f78a0b80b4bbdd672994f4e9e9365e368d71d8b5b7c9ef2d1"} Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.315092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t-config-w57rc" event={"ID":"3aa41169-20ef-41dd-a534-929618c93ecf","Type":"ContainerDied","Data":"4304837e07a0d35b09132d3b8151c66561b180c75ce1afd3868d65e25580b626"} Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.315272 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4304837e07a0d35b09132d3b8151c66561b180c75ce1afd3868d65e25580b626" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.315431 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.315431 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9r84t-config-w57rc"
Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.836431 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.866404 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.880431 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.927112 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"]
Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.975045 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"]
Jan 28 18:36:30 crc kubenswrapper[4985]: E0128 18:36:30.427740 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 18:36:31 crc kubenswrapper[4985]: I0128 18:36:31.283020 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa41169-20ef-41dd-a534-929618c93ecf" path="/var/lib/kubelet/pods/3aa41169-20ef-41dd-a534-929618c93ecf/volumes"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.003123 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-888tv"]
Jan 28 18:36:32 crc kubenswrapper[4985]: E0128 18:36:32.004229 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa41169-20ef-41dd-a534-929618c93ecf" containerName="ovn-config"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.004268 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa41169-20ef-41dd-a534-929618c93ecf" containerName="ovn-config"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.004513 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa41169-20ef-41dd-a534-929618c93ecf" containerName="ovn-config"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.005405 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-888tv"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.018953 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-888tv"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.113059 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4fswm"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.114868 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4fswm"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.132354 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.134229 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2623-account-create-update-nvftp"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.138571 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.140308 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.140405 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.146535 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4fswm"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.219029 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.243954 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.245559 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d89-account-create-update-8fw8c"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.246526 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247342 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247391 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247409 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247478 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.248096 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.250008 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.256887 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.275716 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.323166 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-888tv"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.344800 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-5stnz"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.348485 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5stnz"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.351278 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.351399 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.351767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.352746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.353733 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.353660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.361749 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-49fs2"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.369003 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.373664 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.373895 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.373906 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.374705 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g7p4d"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.376838 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5stnz"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.378665 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.380737 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.400091 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"42677b6b45768a4e26c82339836f4a6db3c2dedb5d1ffef03d828c3bd95e3e76"}
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.400134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"1b5a405cd605ca085e8584ec02e29d6e26dde2f6f00eb347f3a66f2f2443b2f2"}
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.400144 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"fb920d8e8896d7004cd6fa0213cefc59b68255aacd2a26e34a6588f3e7ed5920"}
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.401792 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-49fs2"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.417307 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.419569 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4d8b-account-create-update-hg9ms"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.423399 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.430929 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454803 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454861 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454890 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454908 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454934 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.455023 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.455062 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.455083 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.455106 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.504269 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4fswm"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.543028 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2623-account-create-update-nvftp"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568350 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568448 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568723 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568801 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568885 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568920 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568990 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.569182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.571150 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.597090 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.597706 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.603182 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.612811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.641780 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-br7rn"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.643578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.645356 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.648168 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.652773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.707910 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.709034 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-br7rn"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.712567 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5stnz"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.720894 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.721029 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2615-account-create-update-8xhkc"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.747317 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.754187 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-br7rn"]
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.760313 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.791925 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4d8b-account-create-update-hg9ms"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.815059 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.815500 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.816294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.816641 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.880845 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d89-account-create-update-8fw8c"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.919237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.919522 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.919763 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.919968 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.921656 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.923077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.950190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc"
Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.950206 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn"
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.034369 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-888tv"]
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.130562 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-br7rn"
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.152359 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2615-account-create-update-8xhkc"
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.223292 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4fswm"]
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.299047 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"]
Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.336478 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod768c2a33_259c_4194_ad30_8edffff92f18.slice/crio-ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8 WatchSource:0}: Error finding container ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8: Status 404 returned error can't find the container with id ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.428498 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"7a3e375cf12b62b77d537920d93c88b87a81ab9b2fcc13e3d4b3a1320640e098"}
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.428549 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"0cc2d532b2530baaebe34b9718d266139d05a97dafff3dd3a0e496b978a9a594"}
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.431899 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2623-account-create-update-nvftp" event={"ID":"768c2a33-259c-4194-ad30-8edffff92f18","Type":"ContainerStarted","Data":"ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8"}
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.435153 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4fswm" event={"ID":"6d078ca4-34dd-4a65-a2e4-ffc23f098285","Type":"ContainerStarted","Data":"878c0f310728825bfc3a9f3a42766e3d3fb0ed9db3ca505b2503200e2ee6fa77"}
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.441948 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-888tv" event={"ID":"0a7822ab-0225-4deb-a283-374e32bc995f","Type":"ContainerStarted","Data":"4db841a9fa2f43f46ed12fc0c9a23942efbd002515556520702636ac918f8257"}
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.515988 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5stnz"]
Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.521211 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7074267_6514_4b90_9aef_a4df05b52054.slice/crio-75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140 WatchSource:0}: Error finding container 75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140: Status 404 returned error can't find the container with id 75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140
Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.911538 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c3b6ba3_2c25_4da1_b02f_de0e776383c1.slice/crio-1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651 WatchSource:0}: Error finding container 1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651: Status 404 returned error can't find the container with id 1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.923508 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-49fs2"]
Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.926994 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod887f886a_9541_4075_9d32_0d8feaf32722.slice/crio-984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed WatchSource:0}: Error finding container 984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed: Status 404 returned error can't find the container with id 984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.951602 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"]
Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.968344 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc052fbc1_a102_456b_8658_c954fe91534b.slice/crio-1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107 WatchSource:0}: Error finding container 1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107: Status 404 returned error can't find the container with id 1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.978632 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"]
Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.991014 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"]
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.083002 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-br7rn"]
Jan 28 18:36:34 crc kubenswrapper[4985]: W0128 18:36:34.097744 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fc487cd_a539_4daa_8c13_40d0cea82770.slice/crio-9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6 WatchSource:0}: Error finding container 9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6: Status 404 returned error can't find the container with id 9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.452651 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4fswm" event={"ID":"6d078ca4-34dd-4a65-a2e4-ffc23f098285","Type":"ContainerStarted","Data":"62b40fcabf6fa0fa3594d971ef20837ab76d19a05ef888b27ef59e8e216c6b43"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.455933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-888tv" event={"ID":"0a7822ab-0225-4deb-a283-374e32bc995f","Type":"ContainerStarted","Data":"d394f63865046e3bed1c13acb76b2d5b90327e2b0f8a9073a210a53855ab1204"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.457680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-br7rn" event={"ID":"0fc487cd-a539-4daa-8c13-40d0cea82770","Type":"ContainerStarted","Data":"82ff15708c7feba4b50bfae36f824c144bddeb2ec8ddc05a588aede4034d1eb1"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.457720 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-br7rn" event={"ID":"0fc487cd-a539-4daa-8c13-40d0cea82770","Type":"ContainerStarted","Data":"9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.460757 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d8b-account-create-update-hg9ms" event={"ID":"887f886a-9541-4075-9d32-0d8feaf32722","Type":"ContainerStarted","Data":"f7f9efcfdd23e8d8635c4c036c55b162db6c57b666261780d55e532d672c4438"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.460811 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d8b-account-create-update-hg9ms" event={"ID":"887f886a-9541-4075-9d32-0d8feaf32722","Type":"ContainerStarted","Data":"984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.465707 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5stnz" event={"ID":"d7074267-6514-4b90-9aef-a4df05b52054","Type":"ContainerStarted","Data":"92ba33b439db2a5df5ff34914eff515d7a059caada35a79afe448a92f1201c1e"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.465765 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5stnz" event={"ID":"d7074267-6514-4b90-9aef-a4df05b52054","Type":"ContainerStarted","Data":"75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.470153 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d89-account-create-update-8fw8c" event={"ID":"c052fbc1-a102-456b-8658-c954fe91534b","Type":"ContainerStarted","Data":"0ab08bac76909d1e142ea94f2076118980c9731dca96c80e8289000d98f0d6ce"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.470214 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d89-account-create-update-8fw8c" event={"ID":"c052fbc1-a102-456b-8658-c954fe91534b","Type":"ContainerStarted","Data":"1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.472149 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49fs2" event={"ID":"6c3b6ba3-2c25-4da1-b02f-de0e776383c1","Type":"ContainerStarted","Data":"1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.473305 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-4fswm" podStartSLOduration=2.473279469 podStartE2EDuration="2.473279469s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.46976641 +0000 UTC m=+1405.296329231" watchObservedRunningTime="2026-01-28 18:36:34.473279469 +0000 UTC m=+1405.299842310"
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.475286 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2615-account-create-update-8xhkc" event={"ID":"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2","Type":"ContainerStarted","Data":"fc0b5d4f8a27e5da50b50ceabdadd101d74be078c6014be172f85e01027bd9af"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.475357 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2615-account-create-update-8xhkc" event={"ID":"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2","Type":"ContainerStarted","Data":"a168fe30db9e1f0ecb67e71918d9ed1854222d5e171487ecaf9036aefbf99081"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.484475 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"dbb518cab5a475ed6aa31748656a73c8cab2f8878123d8f312714ec43804fa4c"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.484520 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"04e7cc17bd0f13ac1e9e12cf6ab2e9775bdddb78309ecd4b7396742d6ad1664e"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.490220 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2623-account-create-update-nvftp" event={"ID":"768c2a33-259c-4194-ad30-8edffff92f18","Type":"ContainerStarted","Data":"6f81b27fc2e7a5ce52780bd694a1d7b0af6de17e38f2a816f35448cc2f8e93b0"}
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.503458 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-4d8b-account-create-update-hg9ms" podStartSLOduration=2.50343401 podStartE2EDuration="2.50343401s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.482943782 +0000 UTC m=+1405.309506603" watchObservedRunningTime="2026-01-28 18:36:34.50343401 +0000 UTC m=+1405.329996831"
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.506299 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-888tv" podStartSLOduration=3.5062807510000003 podStartE2EDuration="3.506280751s" podCreationTimestamp="2026-01-28 18:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.496637798 +0000 UTC m=+1405.323200639" watchObservedRunningTime="2026-01-28 18:36:34.506280751 +0000 UTC m=+1405.332843582"
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.543573 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-br7rn" podStartSLOduration=2.5435514230000003 podStartE2EDuration="2.543551423s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.511726504 +0000 UTC m=+1405.338289325" watchObservedRunningTime="2026-01-28 18:36:34.543551423 +0000 UTC m=+1405.370114264"
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.577935 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-5stnz" podStartSLOduration=2.577916253 podStartE2EDuration="2.577916253s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.525576735 +0000 UTC m=+1405.352139556" watchObservedRunningTime="2026-01-28 18:36:34.577916253 +0000 UTC m=+1405.404479064"
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.583440 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-2615-account-create-update-8xhkc" podStartSLOduration=2.583431209 podStartE2EDuration="2.583431209s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.54063366 +0000 UTC m=+1405.367196491" watchObservedRunningTime="2026-01-28 18:36:34.583431209 +0000 UTC m=+1405.409994020"
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.587429 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2623-account-create-update-nvftp" podStartSLOduration=2.587420721 podStartE2EDuration="2.587420721s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.555280334 +0000 UTC m=+1405.381843155" watchObservedRunningTime="2026-01-28 18:36:34.587420721 +0000 UTC m=+1405.413983542"
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.599634 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8d89-account-create-update-8fw8c" podStartSLOduration=2.599615496 podStartE2EDuration="2.599615496s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.569044803 +0000 UTC m=+1405.395607624" watchObservedRunningTime="2026-01-28 18:36:34.599615496 +0000 UTC m=+1405.426178317"
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.620131 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.276589837 podStartE2EDuration="58.620106744s" podCreationTimestamp="2026-01-28 18:35:36 +0000 UTC" firstStartedPulling="2026-01-28 18:36:10.043712925 +0000 UTC m=+1380.870275746" lastFinishedPulling="2026-01-28 18:36:31.387229832 +0000 UTC m=+1402.213792653" observedRunningTime="2026-01-28 18:36:34.612383716 +0000 UTC m=+1405.438946537" watchObservedRunningTime="2026-01-28 18:36:34.620106744 +0000 UTC m=+1405.446669555"
Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.901853 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"]
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.912244 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.919773 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"] Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.094592 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.094656 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.094998 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.095165 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.095611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.095670 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: E0128 18:36:35.188432 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198027 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: 
\"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198091 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198177 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198205 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198708 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198751 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.199315 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.199536 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.200204 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.200467 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.200879 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528"
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.220394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528"
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.232285 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cv528"
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.501515 4985 generic.go:334] "Generic (PLEG): container finished" podID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" containerID="62b40fcabf6fa0fa3594d971ef20837ab76d19a05ef888b27ef59e8e216c6b43" exitCode=0
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.502224 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4fswm" event={"ID":"6d078ca4-34dd-4a65-a2e4-ffc23f098285","Type":"ContainerDied","Data":"62b40fcabf6fa0fa3594d971ef20837ab76d19a05ef888b27ef59e8e216c6b43"}
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.506037 4985 generic.go:334] "Generic (PLEG): container finished" podID="0a7822ab-0225-4deb-a283-374e32bc995f" containerID="d394f63865046e3bed1c13acb76b2d5b90327e2b0f8a9073a210a53855ab1204" exitCode=0
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.506096 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-888tv" event={"ID":"0a7822ab-0225-4deb-a283-374e32bc995f","Type":"ContainerDied","Data":"d394f63865046e3bed1c13acb76b2d5b90327e2b0f8a9073a210a53855ab1204"}
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.508145 4985 generic.go:334] "Generic (PLEG): container finished" podID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" containerID="fc0b5d4f8a27e5da50b50ceabdadd101d74be078c6014be172f85e01027bd9af" exitCode=0
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.508229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2615-account-create-update-8xhkc" event={"ID":"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2","Type":"ContainerDied","Data":"fc0b5d4f8a27e5da50b50ceabdadd101d74be078c6014be172f85e01027bd9af"}
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.509692 4985 generic.go:334] "Generic (PLEG): container finished" podID="0fc487cd-a539-4daa-8c13-40d0cea82770" containerID="82ff15708c7feba4b50bfae36f824c144bddeb2ec8ddc05a588aede4034d1eb1" exitCode=0
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.509761 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-br7rn" event={"ID":"0fc487cd-a539-4daa-8c13-40d0cea82770","Type":"ContainerDied","Data":"82ff15708c7feba4b50bfae36f824c144bddeb2ec8ddc05a588aede4034d1eb1"}
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.511875 4985 generic.go:334] "Generic (PLEG): container finished" podID="887f886a-9541-4075-9d32-0d8feaf32722" containerID="f7f9efcfdd23e8d8635c4c036c55b162db6c57b666261780d55e532d672c4438" exitCode=0
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.511948 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d8b-account-create-update-hg9ms" event={"ID":"887f886a-9541-4075-9d32-0d8feaf32722","Type":"ContainerDied","Data":"f7f9efcfdd23e8d8635c4c036c55b162db6c57b666261780d55e532d672c4438"}
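Each "SyncLoop (PLEG): event for pod" entry carries the event as a struct literal that happens to be valid JSON, so the fields can be pulled out directly when post-processing a log like this. A hypothetical Go sketch (the plegEvent type is mine, mirroring the printed fields, not the kubelet's own type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // plegEvent mirrors the fields kubelet prints for a PLEG event.
    type plegEvent struct {
    	ID   string // pod UID
    	Type string // e.g. ContainerStarted, ContainerDied
    	Data string // container or sandbox ID
    }

    func main() {
    	// The event= payload from the barbican-db-create-4fswm line above.
    	raw := `{"ID":"6d078ca4-34dd-4a65-a2e4-ffc23f098285","Type":"ContainerDied","Data":"62b40fcabf6fa0fa3594d971ef20837ab76d19a05ef888b27ef59e8e216c6b43"}`
    	var ev plegEvent
    	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod %s: %s container %s...\n", ev.ID, ev.Type, ev.Data[:12])
    }

Note the pairing in the log: each event is preceded by a generic.go:334 "container finished" line reporting the same container ID together with its exitCode, all 0 here since these are one-shot db-create/account-create jobs completing normally.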
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.514808 4985 generic.go:334] "Generic (PLEG): container finished" podID="768c2a33-259c-4194-ad30-8edffff92f18" containerID="6f81b27fc2e7a5ce52780bd694a1d7b0af6de17e38f2a816f35448cc2f8e93b0" exitCode=0
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.514898 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2623-account-create-update-nvftp" event={"ID":"768c2a33-259c-4194-ad30-8edffff92f18","Type":"ContainerDied","Data":"6f81b27fc2e7a5ce52780bd694a1d7b0af6de17e38f2a816f35448cc2f8e93b0"}
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.520806 4985 generic.go:334] "Generic (PLEG): container finished" podID="d7074267-6514-4b90-9aef-a4df05b52054" containerID="92ba33b439db2a5df5ff34914eff515d7a059caada35a79afe448a92f1201c1e" exitCode=0
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.520945 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5stnz" event={"ID":"d7074267-6514-4b90-9aef-a4df05b52054","Type":"ContainerDied","Data":"92ba33b439db2a5df5ff34914eff515d7a059caada35a79afe448a92f1201c1e"}
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.522640 4985 generic.go:334] "Generic (PLEG): container finished" podID="c052fbc1-a102-456b-8658-c954fe91534b" containerID="0ab08bac76909d1e142ea94f2076118980c9731dca96c80e8289000d98f0d6ce" exitCode=0
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.522710 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d89-account-create-update-8fw8c" event={"ID":"c052fbc1-a102-456b-8658-c954fe91534b","Type":"ContainerDied","Data":"0ab08bac76909d1e142ea94f2076118980c9731dca96c80e8289000d98f0d6ce"}
Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.786426 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"]
Jan 28 18:36:36 crc kubenswrapper[4985]: I0128 18:36:36.535614 4985 generic.go:334] "Generic (PLEG): container finished" podID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerID="d7223a7a628a68fecc17a7f4ec70d47a10ad7c02ac73f8bb90091f9b898b7963" exitCode=0
Jan 28 18:36:36 crc kubenswrapper[4985]: I0128 18:36:36.535890 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerDied","Data":"d7223a7a628a68fecc17a7f4ec70d47a10ad7c02ac73f8bb90091f9b898b7963"}
Jan 28 18:36:36 crc kubenswrapper[4985]: I0128 18:36:36.542428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerStarted","Data":"c65b2c3c36b7551d10c8a76b6864da53073d25c462caf52ecb94744b028234fc"}
Jan 28 18:36:37 crc kubenswrapper[4985]: I0128 18:36:37.320350 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:37 crc kubenswrapper[4985]: I0128 18:36:37.323469 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.154654 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5stnz"
Need to start a new one" pod="openstack/heat-db-create-5stnz" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.165523 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.175147 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.299560 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") pod \"d7074267-6514-4b90-9aef-a4df05b52054\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.299751 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") pod \"768c2a33-259c-4194-ad30-8edffff92f18\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.299885 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") pod \"887f886a-9541-4075-9d32-0d8feaf32722\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.300040 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") pod \"d7074267-6514-4b90-9aef-a4df05b52054\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.300082 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") pod \"887f886a-9541-4075-9d32-0d8feaf32722\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.300300 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") pod \"768c2a33-259c-4194-ad30-8edffff92f18\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.300716 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "768c2a33-259c-4194-ad30-8edffff92f18" (UID: "768c2a33-259c-4194-ad30-8edffff92f18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.301111 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7074267-6514-4b90-9aef-a4df05b52054" (UID: "d7074267-6514-4b90-9aef-a4df05b52054"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.301358 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.301380 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.301511 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "887f886a-9541-4075-9d32-0d8feaf32722" (UID: "887f886a-9541-4075-9d32-0d8feaf32722"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.305459 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr" (OuterVolumeSpecName: "kube-api-access-f6gwr") pod "d7074267-6514-4b90-9aef-a4df05b52054" (UID: "d7074267-6514-4b90-9aef-a4df05b52054"). InnerVolumeSpecName "kube-api-access-f6gwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.305713 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n" (OuterVolumeSpecName: "kube-api-access-7bv6n") pod "768c2a33-259c-4194-ad30-8edffff92f18" (UID: "768c2a33-259c-4194-ad30-8edffff92f18"). InnerVolumeSpecName "kube-api-access-7bv6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.306873 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr" (OuterVolumeSpecName: "kube-api-access-cbjjr") pod "887f886a-9541-4075-9d32-0d8feaf32722" (UID: "887f886a-9541-4075-9d32-0d8feaf32722"). InnerVolumeSpecName "kube-api-access-cbjjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.351026 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.401470 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.403146 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.403165 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.403175 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.403185 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.455569 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.481425 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.490516 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-888tv" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.511714 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") pod \"c052fbc1-a102-456b-8658-c954fe91534b\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.511937 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") pod \"c052fbc1-a102-456b-8658-c954fe91534b\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.511995 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") pod \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.512152 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") pod \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.512718 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c052fbc1-a102-456b-8658-c954fe91534b" (UID: 
"c052fbc1-a102-456b-8658-c954fe91534b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.513172 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.516210 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" (UID: "3bd289b0-2807-4b7e-bdc0-300fe0ce09b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.530890 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6" (OuterVolumeSpecName: "kube-api-access-k7lz6") pod "3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" (UID: "3bd289b0-2807-4b7e-bdc0-300fe0ce09b2"). InnerVolumeSpecName "kube-api-access-k7lz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.531380 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9" (OuterVolumeSpecName: "kube-api-access-sd4g9") pod "c052fbc1-a102-456b-8658-c954fe91534b" (UID: "c052fbc1-a102-456b-8658-c954fe91534b"). InnerVolumeSpecName "kube-api-access-sd4g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648026 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") pod \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") pod \"0fc487cd-a539-4daa-8c13-40d0cea82770\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648167 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") pod \"0a7822ab-0225-4deb-a283-374e32bc995f\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648205 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") pod \"0fc487cd-a539-4daa-8c13-40d0cea82770\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648341 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") pod \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\" 
(UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648440 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") pod \"0a7822ab-0225-4deb-a283-374e32bc995f\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.649652 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.649675 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.649684 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.654520 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d078ca4-34dd-4a65-a2e4-ffc23f098285" (UID: "6d078ca4-34dd-4a65-a2e4-ffc23f098285"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.655381 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fc487cd-a539-4daa-8c13-40d0cea82770" (UID: "0fc487cd-a539-4daa-8c13-40d0cea82770"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.656396 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.656548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-br7rn" event={"ID":"0fc487cd-a539-4daa-8c13-40d0cea82770","Type":"ContainerDied","Data":"9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.656591 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.660458 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f" (OuterVolumeSpecName: "kube-api-access-9nh2f") pod "0a7822ab-0225-4deb-a283-374e32bc995f" (UID: "0a7822ab-0225-4deb-a283-374e32bc995f"). InnerVolumeSpecName "kube-api-access-9nh2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.660908 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a7822ab-0225-4deb-a283-374e32bc995f" (UID: "0a7822ab-0225-4deb-a283-374e32bc995f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.670694 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6" (OuterVolumeSpecName: "kube-api-access-sznc6") pod "6d078ca4-34dd-4a65-a2e4-ffc23f098285" (UID: "6d078ca4-34dd-4a65-a2e4-ffc23f098285"). InnerVolumeSpecName "kube-api-access-sznc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.670892 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh" (OuterVolumeSpecName: "kube-api-access-2jvxh") pod "0fc487cd-a539-4daa-8c13-40d0cea82770" (UID: "0fc487cd-a539-4daa-8c13-40d0cea82770"). InnerVolumeSpecName "kube-api-access-2jvxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.671689 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2623-account-create-update-nvftp" event={"ID":"768c2a33-259c-4194-ad30-8edffff92f18","Type":"ContainerDied","Data":"ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.671739 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.671827 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.706402 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754647 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754677 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754688 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754699 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754708 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754716 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.755580 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-888tv" event={"ID":"0a7822ab-0225-4deb-a283-374e32bc995f","Type":"ContainerDied","Data":"4db841a9fa2f43f46ed12fc0c9a23942efbd002515556520702636ac918f8257"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.755621 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db841a9fa2f43f46ed12fc0c9a23942efbd002515556520702636ac918f8257" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.755716 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-888tv" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.761364 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.614142907 podStartE2EDuration="1m34.761348512s" podCreationTimestamp="2026-01-28 18:35:05 +0000 UTC" firstStartedPulling="2026-01-28 18:35:25.011013359 +0000 UTC m=+1335.837576180" lastFinishedPulling="2026-01-28 18:36:39.158218954 +0000 UTC m=+1409.984781785" observedRunningTime="2026-01-28 18:36:39.759531651 +0000 UTC m=+1410.586094462" watchObservedRunningTime="2026-01-28 18:36:39.761348512 +0000 UTC m=+1410.587911333" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.771624 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerStarted","Data":"9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.771847 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.799892 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4fswm" event={"ID":"6d078ca4-34dd-4a65-a2e4-ffc23f098285","Type":"ContainerDied","Data":"878c0f310728825bfc3a9f3a42766e3d3fb0ed9db3ca505b2503200e2ee6fa77"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.799944 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878c0f310728825bfc3a9f3a42766e3d3fb0ed9db3ca505b2503200e2ee6fa77" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.800047 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.831595 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2615-account-create-update-8xhkc" event={"ID":"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2","Type":"ContainerDied","Data":"a168fe30db9e1f0ecb67e71918d9ed1854222d5e171487ecaf9036aefbf99081"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.831634 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a168fe30db9e1f0ecb67e71918d9ed1854222d5e171487ecaf9036aefbf99081" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.831717 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.878811 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" podStartSLOduration=5.878789847 podStartE2EDuration="5.878789847s" podCreationTimestamp="2026-01-28 18:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:39.866799169 +0000 UTC m=+1410.693361990" watchObservedRunningTime="2026-01-28 18:36:39.878789847 +0000 UTC m=+1410.705352668" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.890795 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d8b-account-create-update-hg9ms" event={"ID":"887f886a-9541-4075-9d32-0d8feaf32722","Type":"ContainerDied","Data":"984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.890835 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.890914 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.900224 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5stnz" event={"ID":"d7074267-6514-4b90-9aef-a4df05b52054","Type":"ContainerDied","Data":"75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.900278 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.900347 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5stnz" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.908368 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d89-account-create-update-8fw8c" event={"ID":"c052fbc1-a102-456b-8658-c954fe91534b","Type":"ContainerDied","Data":"1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.908419 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.908513 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:40 crc kubenswrapper[4985]: E0128 18:36:40.690723 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.920640 4985 generic.go:334] "Generic (PLEG): container finished" podID="229b9159-df89-4859-b5f3-d34b2759d0fd" containerID="8d83ae610aea076db41903e479372673c489635bc359f8ba503ad92865568b4d" exitCode=0 Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.920674 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5q5qm" event={"ID":"229b9159-df89-4859-b5f3-d34b2759d0fd","Type":"ContainerDied","Data":"8d83ae610aea076db41903e479372673c489635bc359f8ba503ad92865568b4d"} Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.924869 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98"} Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.927010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49fs2" event={"ID":"6c3b6ba3-2c25-4da1-b02f-de0e776383c1","Type":"ContainerStarted","Data":"ef6310844d9eb58852520a7287dfca2d3780f36ea565d58fea9a7e00a7b9506b"} Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.965614 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-49fs2" podStartSLOduration=3.723944768 podStartE2EDuration="8.96557827s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="2026-01-28 18:36:33.916270753 +0000 UTC m=+1404.742833574" lastFinishedPulling="2026-01-28 18:36:39.157904255 +0000 UTC m=+1409.984467076" observedRunningTime="2026-01-28 18:36:40.955432184 +0000 UTC m=+1411.781995015" watchObservedRunningTime="2026-01-28 18:36:40.96557827 +0000 UTC m=+1411.792141091" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.326852 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.469106 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5q5qm" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.539505 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") pod \"229b9159-df89-4859-b5f3-d34b2759d0fd\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.539645 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") pod \"229b9159-df89-4859-b5f3-d34b2759d0fd\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.539932 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") pod \"229b9159-df89-4859-b5f3-d34b2759d0fd\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.540013 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") pod \"229b9159-df89-4859-b5f3-d34b2759d0fd\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.544987 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "229b9159-df89-4859-b5f3-d34b2759d0fd" (UID: "229b9159-df89-4859-b5f3-d34b2759d0fd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.551504 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl" (OuterVolumeSpecName: "kube-api-access-drvrl") pod "229b9159-df89-4859-b5f3-d34b2759d0fd" (UID: "229b9159-df89-4859-b5f3-d34b2759d0fd"). InnerVolumeSpecName "kube-api-access-drvrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.569160 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "229b9159-df89-4859-b5f3-d34b2759d0fd" (UID: "229b9159-df89-4859-b5f3-d34b2759d0fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.607895 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data" (OuterVolumeSpecName: "config-data") pod "229b9159-df89-4859-b5f3-d34b2759d0fd" (UID: "229b9159-df89-4859-b5f3-d34b2759d0fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.642100 4985 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.642143 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.642157 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.642170 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.947555 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5q5qm" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.947956 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="config-reloader" containerID="cri-o://d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd" gracePeriod=600 Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.947995 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5q5qm" event={"ID":"229b9159-df89-4859-b5f3-d34b2759d0fd","Type":"ContainerDied","Data":"08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09"} Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.948033 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09" Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.947768 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="prometheus" containerID="cri-o://e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f" gracePeriod=600 Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.948097 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="thanos-sidecar" containerID="cri-o://66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98" gracePeriod=600 Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.329953 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"] Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.330184 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="dnsmasq-dns" containerID="cri-o://9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95" gracePeriod=10 Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.388640 4985 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"] Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389115 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c052fbc1-a102-456b-8658-c954fe91534b" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389131 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c052fbc1-a102-456b-8658-c954fe91534b" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389143 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc487cd-a539-4daa-8c13-40d0cea82770" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389149 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc487cd-a539-4daa-8c13-40d0cea82770" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389158 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7074267-6514-4b90-9aef-a4df05b52054" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389164 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7074267-6514-4b90-9aef-a4df05b52054" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389176 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7822ab-0225-4deb-a283-374e32bc995f" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389182 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7822ab-0225-4deb-a283-374e32bc995f" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389197 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389204 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389215 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229b9159-df89-4859-b5f3-d34b2759d0fd" containerName="glance-db-sync" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389223 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="229b9159-df89-4859-b5f3-d34b2759d0fd" containerName="glance-db-sync" Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389237 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768c2a33-259c-4194-ad30-8edffff92f18" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389242 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="768c2a33-259c-4194-ad30-8edffff92f18" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389287 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887f886a-9541-4075-9d32-0d8feaf32722" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389294 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="887f886a-9541-4075-9d32-0d8feaf32722" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389304 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389309 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389485 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="887f886a-9541-4075-9d32-0d8feaf32722" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389500 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c052fbc1-a102-456b-8658-c954fe91534b" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389513 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389526 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="768c2a33-259c-4194-ad30-8edffff92f18" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389535 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7074267-6514-4b90-9aef-a4df05b52054" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389544 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="229b9159-df89-4859-b5f3-d34b2759d0fd" containerName="glance-db-sync" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389555 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" containerName="mariadb-account-create-update" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389565 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7822ab-0225-4deb-a283-374e32bc995f" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389577 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc487cd-a539-4daa-8c13-40d0cea82770" containerName="mariadb-database-create" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.390632 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.428546 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"] Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461467 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461528 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmt9\" (UniqueName: \"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461591 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461729 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563142 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563225 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563280 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563329 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563376 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563401 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmt9\" (UniqueName: \"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.564107 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.564232 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.565559 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.565767 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.567026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.599290 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmt9\" (UniqueName: 
\"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.874836 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.005826 4985 generic.go:334] "Generic (PLEG): container finished" podID="96162e6f-966d-438d-9362-ef03abc4b277" containerID="66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98" exitCode=0 Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006147 4985 generic.go:334] "Generic (PLEG): container finished" podID="96162e6f-966d-438d-9362-ef03abc4b277" containerID="d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd" exitCode=0 Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006162 4985 generic.go:334] "Generic (PLEG): container finished" podID="96162e6f-966d-438d-9362-ef03abc4b277" containerID="e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f" exitCode=0 Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006303 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98"} Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006336 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd"} Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006353 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f"} Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006367 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22"} Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006379 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.008953 4985 generic.go:334] "Generic (PLEG): container finished" podID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" containerID="ef6310844d9eb58852520a7287dfca2d3780f36ea565d58fea9a7e00a7b9506b" exitCode=0 Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.009014 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49fs2" event={"ID":"6c3b6ba3-2c25-4da1-b02f-de0e776383c1","Type":"ContainerDied","Data":"ef6310844d9eb58852520a7287dfca2d3780f36ea565d58fea9a7e00a7b9506b"} Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.012749 4985 generic.go:334] "Generic (PLEG): container finished" podID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerID="9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95" exitCode=0 Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.012782 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerDied","Data":"9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95"} Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.012806 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerDied","Data":"c65b2c3c36b7551d10c8a76b6864da53073d25c462caf52ecb94744b028234fc"} Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.012816 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65b2c3c36b7551d10c8a76b6864da53073d25c462caf52ecb94744b028234fc" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.039356 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.041592 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191268 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191320 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191356 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191393 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191419 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191465 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191492 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") pod 
\"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191524 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191549 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191625 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191658 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191718 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191746 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191911 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191951 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191972 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.208744 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.212666 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.226559 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.256265 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.288725 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config" (OuterVolumeSpecName: "config") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303598 4985 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303624 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303635 4985 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303649 4985 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303660 4985 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303849 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7" (OuterVolumeSpecName: "kube-api-access-gv7d7") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "kube-api-access-gv7d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.305826 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out" (OuterVolumeSpecName: "config-out") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.314717 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj" (OuterVolumeSpecName: "kube-api-access-tbssj") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "kube-api-access-tbssj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.321450 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.408571 4985 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.409970 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.410050 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.410617 4985 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.427426 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.447402 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config" (OuterVolumeSpecName: "web-config") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.447813 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.460117 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "pvc-8e57ef50-627c-40e8-9faa-6585e96efec9". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.503729 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.503769 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config" (OuterVolumeSpecName: "config") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514072 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") on node \"crc\" " Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514106 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514119 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514129 4985 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514139 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514147 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.531476 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.574289 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.574589 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8e57ef50-627c-40e8-9faa-6585e96efec9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9") on node "crc" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.582371 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"] Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.616220 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.616266 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.028737 4985 generic.go:334] "Generic (PLEG): container finished" podID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerID="b25b93afe5c0b9bcdcecf1bc670732171d335e6245638df0593c3602ff20f598" exitCode=0 Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.028855 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.030915 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerDied","Data":"b25b93afe5c0b9bcdcecf1bc670732171d335e6245638df0593c3602ff20f598"} Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.030984 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerStarted","Data":"f74f0bb6300abf03a41f5514522429abdf0847f34f1d56df2ed73e73e25973ab"} Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.031005 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.099163 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"] Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.123652 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"] Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.134097 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.143205 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.154539 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155028 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="thanos-sidecar" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155045 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="thanos-sidecar" Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155062 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="prometheus" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155068 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="prometheus" Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155095 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="init-config-reloader" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155101 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="init-config-reloader" Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155108 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="init" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155114 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="init" Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155129 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="dnsmasq-dns" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155135 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="dnsmasq-dns" Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155157 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="config-reloader" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155163 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="config-reloader" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155372 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="prometheus" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155391 4985 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="thanos-sidecar" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155412 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="dnsmasq-dns" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155428 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="config-reloader" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.157901 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162414 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162553 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162650 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-wj229" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162926 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162963 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.163102 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.168743 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.170745 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.183848 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.185272 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228108 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228179 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228353 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228403 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228469 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228558 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pczfz\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-kube-api-access-pczfz\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228602 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d356801-0ed0-4343-87a9-29d23453d621-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228674 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228722 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.276187 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" path="/var/lib/kubelet/pods/51c32b56-4c7e-47e9-b47e-7bcf6295d854/volumes" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.276900 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96162e6f-966d-438d-9362-ef03abc4b277" path="/var/lib/kubelet/pods/96162e6f-966d-438d-9362-ef03abc4b277/volumes" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.331548 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.332799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334361 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334527 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334556 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334612 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334841 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335000 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pczfz\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-kube-api-access-pczfz\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335024 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335096 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335142 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d356801-0ed0-4343-87a9-29d23453d621-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 
18:36:45.336164 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.338149 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.339893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.343912 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.344382 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.347853 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.348026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d356801-0ed0-4343-87a9-29d23453d621-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.348538 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.349585 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.351629 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.352235 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.352349 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/48fd35393a2bd67e182a1b8f0b6bc712b43ce2f1ef21a21dd138faec48abf12b/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.357797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pczfz\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-kube-api-access-pczfz\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.359272 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.410172 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.492737 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.508938 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.568000 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") pod \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.568105 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") pod \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.568138 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") pod \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.577960 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2" (OuterVolumeSpecName: "kube-api-access-pdbg2") pod "6c3b6ba3-2c25-4da1-b02f-de0e776383c1" (UID: "6c3b6ba3-2c25-4da1-b02f-de0e776383c1"). InnerVolumeSpecName "kube-api-access-pdbg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.602838 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c3b6ba3-2c25-4da1-b02f-de0e776383c1" (UID: "6c3b6ba3-2c25-4da1-b02f-de0e776383c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.655346 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data" (OuterVolumeSpecName: "config-data") pod "6c3b6ba3-2c25-4da1-b02f-de0e776383c1" (UID: "6c3b6ba3-2c25-4da1-b02f-de0e776383c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.670816 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.670847 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.670862 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.017155 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.039721 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49fs2" event={"ID":"6c3b6ba3-2c25-4da1-b02f-de0e776383c1","Type":"ContainerDied","Data":"1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651"}
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.039762 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.039813 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-49fs2"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.051220 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerStarted","Data":"a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d"}
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.051394 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.055681 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"636462e069d2e5920aa31d8b295f607f9f97f02c2dc1a1b570b5034f342ccb08"}
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.091693 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" podStartSLOduration=3.091668621 podStartE2EDuration="3.091668621s" podCreationTimestamp="2026-01-28 18:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:46.080121545 +0000 UTC m=+1416.906684366" watchObservedRunningTime="2026-01-28 18:36:46.091668621 +0000 UTC m=+1416.918231442"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.228606 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.266782 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h27v9"]
Jan 28 18:36:46 crc kubenswrapper[4985]: E0128 18:36:46.270823 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" containerName="keystone-db-sync"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.270866 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" containerName="keystone-db-sync"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.281075 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" containerName="keystone-db-sync"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.283015 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.288661 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.289076 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.289382 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.290098 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g7p4d"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.290345 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.312507 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.317852 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.326690 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h27v9"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.386188 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393712 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393760 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393923 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393951 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393999 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394015 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394039 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394066 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394088 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394107 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.487344 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-qjrfx"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.488704 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502529 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502567 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502598 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502630 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502657 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502678 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502697 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502719 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.511216 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.513599 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.518069 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.519194 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.519797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.527450 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.527622 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.527688 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.531353 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.533177 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.535787 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.535976 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9xd8p"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.537964 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.539936 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qjrfx"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.543812 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.550336 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.592982 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.610354 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dwwcb"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.611709 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.626794 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.629537 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.629587 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.629717 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.637419 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.637609 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.637706 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cnbtl"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.694322 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dwwcb"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731329 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731392 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7rs\" (UniqueName: \"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731489 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731506 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731524 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731552 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.750097 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.756541 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-s8hs9"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.758050 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.761866 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.770311 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-r9qmf"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.770696 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.795304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.815117 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.816671 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s8hs9"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.838884 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.838999 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7rs\" (UniqueName: \"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839028 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839084 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839137 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839158 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839179 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839200 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839222 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.864137 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.864950 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.865071 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.897486 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7rs\" (UniqueName: \"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.924087 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941030 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941198 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941291 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941384 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941425 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941452 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.945372 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.946996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.954841 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"]
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.955620 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.957233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.965987 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:46.999134 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.015137 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8h4kr"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.016756 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.045699 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.046577 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.046771 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fpld6"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.057811 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qjrfx"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.061645 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8h4kr"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.100895 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dwwcb"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.115121 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9w9wm"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146371 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146572 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146664 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.156041 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.160642 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fl96f"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.160828 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.178950 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s8hs9"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.238106 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9w9wm"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.249361 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.251851 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.251971 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252088 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252217 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252332 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252426 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252659 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.257891 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.258536 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.263018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.263332 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.264524 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.289190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.295013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355074 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355223 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355243 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355386 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355417 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355459 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355525 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.358763 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.362267 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.405753 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.409542 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.436756 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.440923 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.445525 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.447536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459384 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459478 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459509 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459579 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459711 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459735 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.461281 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.461609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.461828 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.462361 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.464159 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.495696 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.509459 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.527467 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.531679 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.535842 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.536082 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.536226 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jbtcd"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.540813 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9w9wm"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578425 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578515 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578817 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578917 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578962 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.579067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.622805 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.624514 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681643 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681692 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681754 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681775 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681845 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681866 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681894 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681914 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681936 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681964 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681985 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.682015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.682055 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.682072 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.686639 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.687053 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.692384 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.696835 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.699921 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.700876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.755419 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: W0128 18:36:47.769422 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedd90323_75fd_4b14_8cba_b1db7a93c2e2.slice/crio-9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194 WatchSource:0}: Error finding container 9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194: Status 404 returned error can't find the container with id 9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.777798 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785386 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785445 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785521 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785545 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785635 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785664 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785776 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.787049 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.787523 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.792566 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.798852 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.805398 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.808310 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.813910 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.829096 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.830237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.839688 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.856633 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.856685 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d6568af50c46d048a9023d9ac84db4baa0cf8b023fb9ef6c59e622b024bcc77/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.867989 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.888809 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.888889 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.888917 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.889115 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.889296 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.889380 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.889481 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.922769 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.992805 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.992867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.992921 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993013 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993054 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993202 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993831 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.997656 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.000928 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.002010 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.002063 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d04256428a5045d3b55ec61489edb632decdf9f4666f3e6952b725d307784bb2/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.010092 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.011238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.035977 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.112047 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.119732 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qjrfx"] Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.143906 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.263907 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="dnsmasq-dns" containerID="cri-o://a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d" gracePeriod=10 Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.264222 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" event={"ID":"edd90323-75fd-4b14-8cba-b1db7a93c2e2","Type":"ContainerStarted","Data":"9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194"} Jan 28 18:36:48 crc kubenswrapper[4985]: E0128 18:36:48.372874 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:48 crc kubenswrapper[4985]: E0128 18:36:48.378583 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0fb3881_97de_41ce_a664_51e5d4dea3e1.slice/crio-a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.446422 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.626390 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dwwcb"] Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.646307 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s8hs9"] Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.661573 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h27v9"] Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.130443 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9w9wm"] Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.141340 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8h4kr"] Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.155393 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.182009 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:36:49 crc kubenswrapper[4985]: W0128 18:36:49.270858 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf788adab_3912_43da_869e_2450d65b761f.slice/crio-a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c WatchSource:0}: Error finding container a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c: Status 404 returned error can't find the container with id a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c Jan 28 18:36:49 crc kubenswrapper[4985]: W0128 18:36:49.275456 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d02ed_9b38_404a_8926_9d4aaf7bab57.slice/crio-3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6 WatchSource:0}: Error finding container 3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6: Status 404 returned error can't find the container with id 3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6 Jan 28 18:36:49 crc kubenswrapper[4985]: W0128 18:36:49.279746 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ab3789a_5136_46f9_94bb_ab43720d0723.slice/crio-bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199 WatchSource:0}: Error finding container bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199: Status 404 returned error can't find the container with id bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199 Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.330657 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s8hs9" event={"ID":"feecd29d-1d64-47f4-a1af-e634b7d87f3a","Type":"ContainerStarted","Data":"1b5ced815ed25f34faa5ff921cdb8509638b39e75db318b0ce2521c26d4d3829"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.354892 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h27v9" event={"ID":"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e","Type":"ContainerStarted","Data":"f594c9e7d10fa6181857cdca65cc9afd3cc6e7a2e73bb7a606297e4b8c0e60db"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.408718 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" event={"ID":"edd90323-75fd-4b14-8cba-b1db7a93c2e2","Type":"ContainerStarted","Data":"0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.409000 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerName="init" containerID="cri-o://0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26" gracePeriod=10 Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.440817 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qjrfx" event={"ID":"dda9fdbc-ce81-4e63-b32f-733379d893d4","Type":"ContainerStarted","Data":"29e494db6715043d1dade09c32717d476d44c5754f6d809807167b425de76172"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.465688 4985 generic.go:334] "Generic (PLEG): container finished" podID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerID="a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d" exitCode=0 Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.466064 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerDied","Data":"a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.469980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwwcb" event={"ID":"b64f0d6c-55b7-4eac-85f6-e78b581cbebc","Type":"ContainerStarted","Data":"94e9ea7881e540161402fe0b16a42aca0004dbafe8de2259a73da5d4a537b2b5"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.670152 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.008557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.123280 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.189586 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:50 crc kubenswrapper[4985]: E0128 18:36:50.200045 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.325465 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.336965 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxmt9\" (UniqueName: \"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337135 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337290 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337437 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337533 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.344895 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9" (OuterVolumeSpecName: "kube-api-access-pxmt9") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "kube-api-access-pxmt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.441117 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxmt9\" (UniqueName: \"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.518464 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config" (OuterVolumeSpecName: "config") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.525691 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.543782 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.543813 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.572354 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:36:50 crc kubenswrapper[4985]: E0128 18:36:50.572851 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="init" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.572867 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="init" Jan 28 18:36:50 crc kubenswrapper[4985]: E0128 18:36:50.572909 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="dnsmasq-dns" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.572915 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="dnsmasq-dns" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.573103 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="dnsmasq-dns" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.575491 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.582885 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwwcb" event={"ID":"b64f0d6c-55b7-4eac-85f6-e78b581cbebc","Type":"ContainerStarted","Data":"461350d6795ff69f1fd203af637d4dd96dfc2a84c72f138630ab057e524c2df1"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.615473 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerStarted","Data":"3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.616831 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.626895 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.629942 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerStarted","Data":"bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.645042 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.645655 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.645743 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.645979 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.646103 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.646117 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.668742 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerDied","Data":"f74f0bb6300abf03a41f5514522429abdf0847f34f1d56df2ed73e73e25973ab"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.668816 4985 scope.go:117] "RemoveContainer" containerID="a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.669125 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.669512 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.681855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h27v9" event={"ID":"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e","Type":"ContainerStarted","Data":"12e6aacaa8527f36ddf49eb87d558411736fa67a95ae92f557207b934aed3337"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.695589 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dwwcb" podStartSLOduration=4.69556422 podStartE2EDuration="4.69556422s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:50.628111366 +0000 UTC m=+1421.454674187" watchObservedRunningTime="2026-01-28 18:36:50.69556422 +0000 UTC m=+1421.522127041" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.716710 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9w9wm" event={"ID":"2ba5eedf-14b8-45ce-b738-e41a6daff299","Type":"ContainerStarted","Data":"d797c3ffe3dba6a95e4e6284ce4ebd9bc07a285808da1bdf5575d32b4671bc8a"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.720607 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h27v9" podStartSLOduration=4.720581186 podStartE2EDuration="4.720581186s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:50.702866566 +0000 UTC m=+1421.529429387" watchObservedRunningTime="2026-01-28 18:36:50.720581186 +0000 UTC m=+1421.547144037" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.724465 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.732673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8h4kr" event={"ID":"f788adab-3912-43da-869e-2450d65b761f","Type":"ContainerStarted","Data":"a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.749566 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.749818 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.750159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.750341 4985 reconciler_common.go:293] "Volume detached for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.751457 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.752396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.787103 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.797645 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.817928 4985 generic.go:334] "Generic (PLEG): container finished" podID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerID="0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26" exitCode=0 Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.818086 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" event={"ID":"edd90323-75fd-4b14-8cba-b1db7a93c2e2","Type":"ContainerDied","Data":"0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.838561 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerStarted","Data":"901d4da2ea774977403413c52d844a7d397bdd9df889717b5e5f413275ab1407"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.852449 4985 scope.go:117] "RemoveContainer" containerID="b25b93afe5c0b9bcdcecf1bc670732171d335e6245638df0593c3602ff20f598" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.853726 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.853861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.853970 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: 
\"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.853997 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.854104 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.854123 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.858271 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"783d0e39177fc6f57441bbe975e76729d0ab9a44d7fd2176639c567f4c481bbf"} Jan 28 18:36:50 crc kubenswrapper[4985]: E0128 18:36:50.866573 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.879502 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerStarted","Data":"4d27cc9d7c9abb101a5028da312f83cf7530369c6dbbf15f3f10f537bfca14e2"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.900062 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd" (OuterVolumeSpecName: "kube-api-access-m7vbd") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "kube-api-access-m7vbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.944924 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.945261 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.948924 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.956053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config" (OuterVolumeSpecName: "config") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.956493 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.957772 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.957800 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.957810 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.957821 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: W0128 18:36:50.958710 4985 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/edd90323-75fd-4b14-8cba-b1db7a93c2e2/volumes/kubernetes.io~configmap/config Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.958726 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config" (OuterVolumeSpecName: "config") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.982451 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.035078 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.046896 4985 scope.go:117] "RemoveContainer" containerID="0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.061559 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.061588 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.125455 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"] Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.139954 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"] Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.316793 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" path="/var/lib/kubelet/pods/f0fb3881-97de-41ce-a664-51e5d4dea3e1/volumes" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.769376 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.956787 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.958203 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:36:51 crc kubenswrapper[4985]: E0128 18:36:51.958585 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerName="init" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.958597 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerName="init" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.958859 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerName="init" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.964611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" event={"ID":"edd90323-75fd-4b14-8cba-b1db7a93c2e2","Type":"ContainerDied","Data":"9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194"} Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.964749 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.996558 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.007063 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerStarted","Data":"a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71"} Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.018566 4985 generic.go:334] "Generic (PLEG): container finished" podID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerID="f090f667713f31e333608c60874aca9b174e0dc6eb4e52fb2779980ecf229992" exitCode=0 Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.018648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerDied","Data":"f090f667713f31e333608c60874aca9b174e0dc6eb4e52fb2779980ecf229992"} Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.056211 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerStarted","Data":"cb6d06c38f976feb1cb400142c94c846180c10a5200e7df25e3c5053c66cb609"} Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.088170 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerStarted","Data":"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c"} Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.096356 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"] Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.100567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.100681 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.100777 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.216744 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " 
pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.217052 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.217123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.217271 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.236818 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"] Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.252975 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.386559 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.659383 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.217105 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerStarted","Data":"16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e"} Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.218768 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.244301 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" podStartSLOduration=6.244282296 podStartE2EDuration="6.244282296s" podCreationTimestamp="2026-01-28 18:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:53.237741801 +0000 UTC m=+1424.064304622" watchObservedRunningTime="2026-01-28 18:36:53.244282296 +0000 UTC m=+1424.070845117" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.264828 4985 generic.go:334] "Generic (PLEG): container finished" podID="1ebe025a-cece-4723-928f-b6649ea27040" containerID="c90878479aa212272619165fb9e5e236c18feef83564d0b2ea60daad9b1b13ff" exitCode=0 Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.278106 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-log" containerID="cri-o://2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" gracePeriod=30 Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.278455 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-httpd" containerID="cri-o://f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" gracePeriod=30 Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.367707 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" path="/var/lib/kubelet/pods/edd90323-75fd-4b14-8cba-b1db7a93c2e2/volumes" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.368639 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerDied","Data":"c90878479aa212272619165fb9e5e236c18feef83564d0b2ea60daad9b1b13ff"} Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.433335 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.433312363 podStartE2EDuration="7.433312363s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:53.347002856 +0000 UTC m=+1424.173565677" watchObservedRunningTime="2026-01-28 18:36:53.433312363 +0000 UTC m=+1424.259875184" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.444268 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.271241 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.319830 4985 generic.go:334] "Generic (PLEG): container finished" podID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" exitCode=143 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.319865 4985 generic.go:334] "Generic (PLEG): container finished" podID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" exitCode=143 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.319990 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.320625 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerDied","Data":"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.320675 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerDied","Data":"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.320688 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerDied","Data":"4d27cc9d7c9abb101a5028da312f83cf7530369c6dbbf15f3f10f537bfca14e2"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.320704 4985 scope.go:117] "RemoveContainer" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.326383 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerStarted","Data":"d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.326444 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-log" containerID="cri-o://a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71" gracePeriod=30 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.326548 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-httpd" containerID="cri-o://d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f" gracePeriod=30 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.333416 4985 generic.go:334] "Generic (PLEG): container finished" podID="493defdf-169c-4278-b370-69068ec73439" containerID="bb466fa56833f63c962ba1cccca2fbc2223625dc1bb00585f9df84071452e8e0" exitCode=0 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.333815 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerDied","Data":"bb466fa56833f63c962ba1cccca2fbc2223625dc1bb00585f9df84071452e8e0"} Jan 28 18:36:54 crc kubenswrapper[4985]: 
I0128 18:36:54.333859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerStarted","Data":"80ceba888693469af3d53c546cb7c4eba0040a2f5c19424d7894edf743d991ac"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.353701 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.353672547 podStartE2EDuration="8.353672547s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:54.343588562 +0000 UTC m=+1425.170151383" watchObservedRunningTime="2026-01-28 18:36:54.353672547 +0000 UTC m=+1425.180235368" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.366306 4985 scope.go:117] "RemoveContainer" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428547 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428739 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428832 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428969 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.429070 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.429115 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.430611 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.431382 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs" (OuterVolumeSpecName: "logs") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.437392 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts" (OuterVolumeSpecName: "scripts") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.463038 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb" (OuterVolumeSpecName: "kube-api-access-z22wb") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "kube-api-access-z22wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.464573 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f" (OuterVolumeSpecName: "glance") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "pvc-a28b8b70-fd49-47a9-9731-34913060b77f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.501126 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.504072 4985 scope.go:117] "RemoveContainer" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" Jan 28 18:36:54 crc kubenswrapper[4985]: E0128 18:36:54.507068 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": container with ID starting with f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a not found: ID does not exist" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.507131 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a"} err="failed to get container status \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": rpc error: code = NotFound desc = could not find container \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": container with ID starting with f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a not found: ID does not exist" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.507169 4985 scope.go:117] "RemoveContainer" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" Jan 28 18:36:54 crc kubenswrapper[4985]: E0128 18:36:54.508215 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": container with ID starting with 2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c not found: ID does not exist" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.508240 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c"} err="failed to get container status \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": rpc error: code = NotFound desc = could not find container \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": container with ID starting with 2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c not found: ID does not exist" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.508272 4985 scope.go:117] "RemoveContainer" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.509481 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a"} err="failed to get container status \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": rpc error: code = NotFound desc = could not find container \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": container with ID starting with f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a not found: ID does not exist" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.509506 4985 scope.go:117] "RemoveContainer" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.510995 4985 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c"} err="failed to get container status \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": rpc error: code = NotFound desc = could not find container \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": container with ID starting with 2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c not found: ID does not exist" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.536691 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.536928 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.537004 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.537097 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.537228 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") on node \"crc\" " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.537350 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.558994 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data" (OuterVolumeSpecName: "config-data") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.580718 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.580868 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a28b8b70-fd49-47a9-9731-34913060b77f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f") on node "crc" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.642969 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.643003 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.759396 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.772585 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.805866 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:54 crc kubenswrapper[4985]: E0128 18:36:54.807701 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-log" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.807751 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-log" Jan 28 18:36:54 crc kubenswrapper[4985]: E0128 18:36:54.807845 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-httpd" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.807856 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-httpd" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.808438 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-httpd" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.808473 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-log" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.826844 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.844217 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.851613 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.884629 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.884912 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.884995 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885024 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885071 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885102 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885372 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.968735 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988330 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988400 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988488 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988509 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988545 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988608 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 
18:36:54.996522 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.998924 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.999405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.000158 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.003230 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.008074 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.011972 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.012015 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d6568af50c46d048a9023d9ac84db4baa0cf8b023fb9ef6c59e622b024bcc77/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.021960 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.067414 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.199729 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.311749 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" path="/var/lib/kubelet/pods/94d84421-da66-4847-bfcc-f2fc38d072e7/volumes" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.376893 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerID="d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f" exitCode=0 Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.376929 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerID="a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71" exitCode=143 Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.376972 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerDied","Data":"d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f"} Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.376999 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerDied","Data":"a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71"} Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.395767 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerStarted","Data":"ac4c636c19c5a93172c99e41217794568a75dad0ad348a3d4022d6d7bcdfe984"} Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.636791 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704172 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704270 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704486 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704638 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704764 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.705627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs" (OuterVolumeSpecName: "logs") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.705867 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.736657 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k" (OuterVolumeSpecName: "kube-api-access-d4q8k") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "kube-api-access-d4q8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.760440 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts" (OuterVolumeSpecName: "scripts") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.771154 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc" (OuterVolumeSpecName: "glance") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "pvc-515c3b80-2464-4146-928c-cf9de6a379dc". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.800058 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808134 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808184 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") on node \"crc\" " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808198 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808206 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808217 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808225 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.831959 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data" (OuterVolumeSpecName: "config-data") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.838274 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.838414 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-515c3b80-2464-4146-928c-cf9de6a379dc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc") on node "crc" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.911679 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.911716 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.924668 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.407275 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerDied","Data":"901d4da2ea774977403413c52d844a7d397bdd9df889717b5e5f413275ab1407"} Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.408380 4985 scope.go:117] "RemoveContainer" containerID="d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.408442 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.416919 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerStarted","Data":"43d735c182cbb81ec5017199eb78a2029759022896fdabfe1470a42d01bd6b7b"} Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.421000 4985 generic.go:334] "Generic (PLEG): container finished" podID="1ebe025a-cece-4723-928f-b6649ea27040" containerID="ac4c636c19c5a93172c99e41217794568a75dad0ad348a3d4022d6d7bcdfe984" exitCode=0 Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.421130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerDied","Data":"ac4c636c19c5a93172c99e41217794568a75dad0ad348a3d4022d6d7bcdfe984"} Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.510394 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.525883 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.536527 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: E0128 18:36:56.537232 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-log" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.537312 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-log" Jan 28 18:36:56 crc kubenswrapper[4985]: E0128 18:36:56.537417 4985 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-httpd" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.537473 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-httpd" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.537775 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-log" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.537864 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-httpd" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.539167 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.543332 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.543388 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.548689 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629834 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629856 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629953 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.630007 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.687769 4985 scope.go:117] "RemoveContainer" containerID="a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732291 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732333 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732371 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732394 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732428 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732462 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.733581 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.734213 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.739861 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.739895 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d04256428a5045d3b55ec61489edb632decdf9f4666f3e6952b725d307784bb2/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.740008 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.740512 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.740782 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.741822 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.756438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.805892 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.868956 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.289978 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" path="/var/lib/kubelet/pods/ff279d8d-4c4e-4bdc-a880-7a739d15999c/volumes"
Jan 28 18:36:57 crc kubenswrapper[4985]: W0128 18:36:57.453012 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod183853eb_591f_4859_9824_550b76c6f115.slice/crio-3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f WatchSource:0}: Error finding container 3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f: Status 404 returned error can't find the container with id 3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f
Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.454850 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.628492 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x"
Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.750810 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"]
Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.751269 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" containerID="cri-o://7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659" gracePeriod=10
Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.461420 4985 generic.go:334] "Generic (PLEG): container finished" podID="3d356801-0ed0-4343-87a9-29d23453d621" containerID="783d0e39177fc6f57441bbe975e76729d0ab9a44d7fd2176639c567f4c481bbf" exitCode=0
Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.461571 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerDied","Data":"783d0e39177fc6f57441bbe975e76729d0ab9a44d7fd2176639c567f4c481bbf"}
Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.469209 4985 generic.go:334] "Generic (PLEG): container finished" podID="fa80be1e-734c-44bc-a957-137332ecd58a" containerID="7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659" exitCode=0
Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.470155 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerDied","Data":"7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659"}
Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.473654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerStarted","Data":"3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f"}
Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.477889 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerStarted","Data":"c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f"}
Jan 28 18:37:00 crc kubenswrapper[4985]: I0128 18:37:00.537038 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused"
Jan 28 18:37:01 crc kubenswrapper[4985]: E0128 18:37:01.196464 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 18:37:02 crc kubenswrapper[4985]: I0128 18:37:02.532453 4985 generic.go:334] "Generic (PLEG): container finished" podID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" containerID="12e6aacaa8527f36ddf49eb87d558411736fa67a95ae92f557207b934aed3337" exitCode=0
Jan 28 18:37:02 crc kubenswrapper[4985]: I0128 18:37:02.532537 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h27v9" event={"ID":"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e","Type":"ContainerDied","Data":"12e6aacaa8527f36ddf49eb87d558411736fa67a95ae92f557207b934aed3337"}
Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.123378 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h27v9"
Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185072 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") "
Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") "
Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185216 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") "
Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185361 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") "
Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185495 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") "
Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185617 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") "
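The dnsmasq-dns-b8fbc5445-f4mq4 entries above show a graceful teardown: SyncLoop DELETE, the container killed with a 10s grace period, a PLEG ContainerDied event, and then readiness probe failures against 10.217.0.147:5353, first "connection refused" while the network namespace still exists and later "i/o timeout" once the pod IP stops answering. A minimal sketch of what such a TCP-style check observes follows; it is an illustrative assumption, not the kubelet's actual prober code, and the endpoint is copied from the probe output above.

// readycheck.go - illustrative sketch only (assumption: not kubelet code).
// Dials the probed endpoint once and reports the same two failure modes
// seen in the "Probe failed" entries above: fast refusal vs. timeout.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Assumption: endpoint taken from the probe output in this log.
	addr := "10.217.0.147:5353"
	conn, err := net.DialTimeout("tcp", addr, 1*time.Second)
	if err != nil {
		if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
			fmt.Printf("probe failed (timeout): %v\n", err) // pod IP no longer answering
		} else {
			fmt.Printf("probe failed: %v\n", err) // e.g. connect: connection refused
		}
		return
	}
	conn.Close()
	fmt.Println("probe succeeded")
}

The refused-then-timeout progression is a useful triage signal when reading logs like this one: refusal means the address was reachable but nothing listened, while a timeout suggests the address itself was already gone.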
Jan 28 18:37:05 crc kubenswrapper[4985]: E0128 18:37:05.187918 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.194316 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts" (OuterVolumeSpecName: "scripts") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.196445 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8" (OuterVolumeSpecName: "kube-api-access-qzfj8") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "kube-api-access-qzfj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.203926 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.212771 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.224958 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.237377 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data" (OuterVolumeSpecName: "config-data") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289023 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289055 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289118 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289130 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289143 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289152 4985 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.574407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h27v9" event={"ID":"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e","Type":"ContainerDied","Data":"f594c9e7d10fa6181857cdca65cc9afd3cc6e7a2e73bb7a606297e4b8c0e60db"} Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.574443 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f594c9e7d10fa6181857cdca65cc9afd3cc6e7a2e73bb7a606297e4b8c0e60db" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.574484 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.221067 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h27v9"] Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.230815 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h27v9"] Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.323547 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hlgnm"] Jan 28 18:37:06 crc kubenswrapper[4985]: E0128 18:37:06.324122 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" containerName="keystone-bootstrap" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.324142 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" containerName="keystone-bootstrap" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.324527 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" containerName="keystone-bootstrap" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.325478 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.327614 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.327838 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.328124 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.328317 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g7p4d" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.330328 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.339003 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hlgnm"] Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.417175 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.417325 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.417410 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.417733 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.418009 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.418308 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.519864 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.519927 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.520001 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.520051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.520069 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.520098 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.534116 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.534372 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.539287 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.539873 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.540378 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.540668 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.641401 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:07 crc kubenswrapper[4985]: I0128 18:37:07.300745 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" path="/var/lib/kubelet/pods/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e/volumes" Jan 28 18:37:10 crc kubenswrapper[4985]: I0128 18:37:10.537344 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Jan 28 18:37:11 crc kubenswrapper[4985]: E0128 18:37:11.524626 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:37:15 crc kubenswrapper[4985]: I0128 18:37:15.538134 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Jan 28 18:37:15 crc kubenswrapper[4985]: I0128 18:37:15.539020 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:37:20 crc kubenswrapper[4985]: I0128 18:37:20.539459 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Jan 28 18:37:22 crc kubenswrapper[4985]: E0128 18:37:22.471889 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Jan 28 18:37:22 crc kubenswrapper[4985]: E0128 18:37:22.472516 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qll99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mbtp6_openshift-marketplace(1ebe025a-cece-4723-928f-b6649ea27040): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:37:22 crc kubenswrapper[4985]: E0128 18:37:22.474132 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.601055 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.736499 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.737781 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.737947 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.738554 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.738828 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.744056 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb" (OuterVolumeSpecName: "kube-api-access-xdwqb") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "kube-api-access-xdwqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.795366 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerDied","Data":"d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87"} Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.795459 4985 scope.go:117] "RemoveContainer" containerID="7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.795656 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.800413 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config" (OuterVolumeSpecName: "config") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.803872 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.808194 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.816478 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843237 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843278 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843310 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843321 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843331 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:23 crc kubenswrapper[4985]: I0128 18:37:23.140953 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"] Jan 28 18:37:23 crc kubenswrapper[4985]: I0128 18:37:23.152146 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"] Jan 28 18:37:23 crc kubenswrapper[4985]: I0128 18:37:23.277522 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" path="/var/lib/kubelet/pods/fa80be1e-734c-44bc-a957-137332ecd58a/volumes" Jan 28 18:37:25 crc kubenswrapper[4985]: I0128 18:37:25.540229 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.147:5353: i/o timeout" Jan 28 18:37:26 crc kubenswrapper[4985]: E0128 18:37:26.596665 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" Jan 28 18:37:28 crc kubenswrapper[4985]: E0128 18:37:28.479463 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 28 18:37:28 crc kubenswrapper[4985]: E0128 18:37:28.486243 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szgd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-s8hs9_openstack(feecd29d-1d64-47f4-a1af-e634b7d87f3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:37:28 crc kubenswrapper[4985]: E0128 18:37:28.491619 4985 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-s8hs9" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" Jan 28 18:37:28 crc kubenswrapper[4985]: E0128 18:37:28.892471 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-s8hs9" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.014867 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.015081 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h59fh589h4h588h656h68ch87h586h58dhc7hb8h5f6h9dhdh9h585h67fh56ch5ch57dhcch5c7hd7h579hddh58ch77h5dh77h57fh57q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s629,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2d1d02ed-9b38-404a-8926-9d4aaf7bab57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.361670 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.362020 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n5mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qjrfx_openstack(dda9fdbc-ce81-4e63-b32f-733379d893d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.363204 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-qjrfx" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.394381 4985 scope.go:117] "RemoveContainer" containerID="b07a966b1eedec1e93ccdffea190010036fa22a709598fabaaf5909bac14f589" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.915584 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8h4kr" event={"ID":"f788adab-3912-43da-869e-2450d65b761f","Type":"ContainerStarted","Data":"38e38c87534fe5e2e6e7da069589b30c70844285bffd29f51db0ab1e32c6ef5c"} Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.925641 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerStarted","Data":"0f31ce051029b23ddf495fadb6b6c6e764037b32b8a976658fc8f5f168e24bfd"} Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.931513 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"d672a1cd2835bd532c59c1d89f245b7417d6804249dc7c63ead12ec5e0ccb77d"} Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.947324 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8h4kr" podStartSLOduration=3.92623114 podStartE2EDuration="44.947300048s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="2026-01-28 18:36:49.283165154 +0000 UTC m=+1420.109727975" lastFinishedPulling="2026-01-28 18:37:30.304234062 +0000 UTC m=+1461.130796883" observedRunningTime="2026-01-28 18:37:30.929752352 +0000 UTC m=+1461.756315173" watchObservedRunningTime="2026-01-28 18:37:30.947300048 +0000 UTC m=+1461.773862869" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.952038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9w9wm" event={"ID":"2ba5eedf-14b8-45ce-b738-e41a6daff299","Type":"ContainerStarted","Data":"badce37bfe68dc4bcc676f7b0c786e9f03574bc7e99b889419d42e1d88e90514"} Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.952928 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-qjrfx" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.978659 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hlgnm"] Jan 28 18:37:30 crc kubenswrapper[4985]: W0128 18:37:30.980438 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a3199c2_6b1c_4a07_849d_cc92d372c5c3.slice/crio-77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea WatchSource:0}: Error finding container 77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea: Status 404 returned error can't find the container with id 77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.984580 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9w9wm" podStartSLOduration=3.965918191 podStartE2EDuration="44.98455715s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="2026-01-28 18:36:49.330756058 +0000 UTC m=+1420.157318879" lastFinishedPulling="2026-01-28 18:37:30.349395017 +0000 UTC m=+1461.175957838" observedRunningTime="2026-01-28 18:37:30.969893106 +0000 UTC m=+1461.796455927" watchObservedRunningTime="2026-01-28 18:37:30.98455715 +0000 UTC m=+1461.811119971" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.993124 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.565658 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerStarted","Data":"c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016"} Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.568995 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerStarted","Data":"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c"} Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.572572 4985 generic.go:334] "Generic (PLEG): container finished" podID="493defdf-169c-4278-b370-69068ec73439" containerID="0f31ce051029b23ddf495fadb6b6c6e764037b32b8a976658fc8f5f168e24bfd" exitCode=0 Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.572603 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerDied","Data":"0f31ce051029b23ddf495fadb6b6c6e764037b32b8a976658fc8f5f168e24bfd"} Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.574361 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlgnm" event={"ID":"4a3199c2-6b1c-4a07-849d-cc92d372c5c3","Type":"ContainerStarted","Data":"77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea"} Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.596110 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=38.596087026 podStartE2EDuration="38.596087026s" podCreationTimestamp="2026-01-28 18:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:32.594200703 +0000 UTC m=+1463.420763534" watchObservedRunningTime="2026-01-28 18:37:32.596087026 +0000 UTC m=+1463.422649847" Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.605425 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerStarted","Data":"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951"} Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.607520 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlgnm" event={"ID":"4a3199c2-6b1c-4a07-849d-cc92d372c5c3","Type":"ContainerStarted","Data":"bf3748442896f3bbadb859f2d03e272740c521c498e8208b7d4bed6a247a0dd0"} Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.610737 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"1bb403b36214d9dd666e2b32bc6b48e4b0145e97098046a0b40fa4f9fdd5bb47"} Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.638190 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=38.638160959 podStartE2EDuration="38.638160959s" podCreationTimestamp="2026-01-28 18:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:34.627296282 +0000 UTC m=+1465.453859103" watchObservedRunningTime="2026-01-28 18:37:34.638160959 +0000 UTC m=+1465.464723790" Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.665189 4985 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/keystone-bootstrap-hlgnm" podStartSLOduration=28.665164382 podStartE2EDuration="28.665164382s" podCreationTimestamp="2026-01-28 18:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:34.662211708 +0000 UTC m=+1465.488774529" watchObservedRunningTime="2026-01-28 18:37:34.665164382 +0000 UTC m=+1465.491727223" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.200583 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.200641 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.294422 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.294523 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.624704 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.624780 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.636519 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerStarted","Data":"63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249"} Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.637970 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerStarted","Data":"e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e"} Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.641969 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"0a11aa37babe5740860c5b2dd431728b72db2aeef53e5c3e5c4896ed88505ab1"} Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.659604 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8fg44" podStartSLOduration=4.166458743 podStartE2EDuration="45.659585648s" podCreationTimestamp="2026-01-28 18:36:51 +0000 UTC" firstStartedPulling="2026-01-28 18:36:54.36655398 +0000 UTC m=+1425.193116801" lastFinishedPulling="2026-01-28 18:37:35.859680885 +0000 UTC m=+1466.686243706" observedRunningTime="2026-01-28 18:37:36.655190864 +0000 UTC m=+1467.481753705" watchObservedRunningTime="2026-01-28 18:37:36.659585648 +0000 UTC m=+1467.486148469" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.693901 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=51.693876876 podStartE2EDuration="51.693876876s" podCreationTimestamp="2026-01-28 18:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 18:37:36.690488591 +0000 UTC m=+1467.517051432" watchObservedRunningTime="2026-01-28 18:37:36.693876876 +0000 UTC m=+1467.520439717" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.869981 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.870025 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.910498 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.929174 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:37 crc kubenswrapper[4985]: I0128 18:37:37.655578 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:37 crc kubenswrapper[4985]: I0128 18:37:37.655824 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:39 crc kubenswrapper[4985]: I0128 18:37:39.678602 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:37:40 crc kubenswrapper[4985]: I0128 18:37:40.494101 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.661424 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.661916 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.722759 4985 generic.go:334] "Generic (PLEG): container finished" podID="f788adab-3912-43da-869e-2450d65b761f" containerID="38e38c87534fe5e2e6e7da069589b30c70844285bffd29f51db0ab1e32c6ef5c" exitCode=0 Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.722819 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8h4kr" event={"ID":"f788adab-3912-43da-869e-2450d65b761f","Type":"ContainerDied","Data":"38e38c87534fe5e2e6e7da069589b30c70844285bffd29f51db0ab1e32c6ef5c"} Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.729989 4985 generic.go:334] "Generic (PLEG): container finished" podID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" containerID="bf3748442896f3bbadb859f2d03e272740c521c498e8208b7d4bed6a247a0dd0" exitCode=0 Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.730026 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlgnm" event={"ID":"4a3199c2-6b1c-4a07-849d-cc92d372c5c3","Type":"ContainerDied","Data":"bf3748442896f3bbadb859f2d03e272740c521c498e8208b7d4bed6a247a0dd0"} Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.249791 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.249887 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.260839 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.287468 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.287673 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.288294 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.730711 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8fg44" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" probeResult="failure" output=< Jan 28 18:37:43 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:37:43 crc kubenswrapper[4985]: > Jan 28 18:37:44 crc kubenswrapper[4985]: I0128 18:37:44.752294 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerStarted","Data":"fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077"} Jan 28 18:37:45 crc kubenswrapper[4985]: I0128 18:37:45.493902 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 28 18:37:45 crc kubenswrapper[4985]: I0128 18:37:45.500864 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 28 18:37:45 crc kubenswrapper[4985]: I0128 18:37:45.769789 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 28 18:37:45 crc kubenswrapper[4985]: I0128 18:37:45.791588 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mbtp6" podStartSLOduration=5.351058306 podStartE2EDuration="55.791563516s" podCreationTimestamp="2026-01-28 18:36:50 +0000 UTC" firstStartedPulling="2026-01-28 18:36:53.267315486 +0000 UTC m=+1424.093878307" lastFinishedPulling="2026-01-28 18:37:43.707820696 +0000 UTC m=+1474.534383517" observedRunningTime="2026-01-28 18:37:45.78923752 +0000 UTC m=+1476.615800341" watchObservedRunningTime="2026-01-28 18:37:45.791563516 +0000 UTC m=+1476.618126347" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.070498 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8h4kr" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.082081 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216293 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216632 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216698 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216739 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216772 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216800 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217174 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217244 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217291 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217383 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217427 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") "
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217978 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs" (OuterVolumeSpecName: "logs") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.232768 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.232810 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts" (OuterVolumeSpecName: "scripts") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.235243 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.236226 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts" (OuterVolumeSpecName: "scripts") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.236546 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8" (OuterVolumeSpecName: "kube-api-access-wmsb8") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "kube-api-access-wmsb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.247647 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d" (OuterVolumeSpecName: "kube-api-access-k5n2d") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). InnerVolumeSpecName "kube-api-access-k5n2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319880 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319912 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319926 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319942 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319953 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319963 4985 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319975 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.320173 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.354103 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data" (OuterVolumeSpecName: "config-data") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.366405 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data" (OuterVolumeSpecName: "config-data") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.374464 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.422046 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.424597 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.424627 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.424637 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.776329 4985 generic.go:334] "Generic (PLEG): container finished" podID="2ba5eedf-14b8-45ce-b738-e41a6daff299" containerID="badce37bfe68dc4bcc676f7b0c786e9f03574bc7e99b889419d42e1d88e90514" exitCode=0
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.776402 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9w9wm" event={"ID":"2ba5eedf-14b8-45ce-b738-e41a6daff299","Type":"ContainerDied","Data":"badce37bfe68dc4bcc676f7b0c786e9f03574bc7e99b889419d42e1d88e90514"}
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.779194 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8h4kr" event={"ID":"f788adab-3912-43da-869e-2450d65b761f","Type":"ContainerDied","Data":"a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c"}
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.779241 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c"
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.779202 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8h4kr"
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.781158 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qjrfx" event={"ID":"dda9fdbc-ce81-4e63-b32f-733379d893d4","Type":"ContainerStarted","Data":"d27c06d418e20207c2740cbbbe652b37993ed962b6ece756db68f47e6fdcdfce"}
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.783656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlgnm" event={"ID":"4a3199c2-6b1c-4a07-849d-cc92d372c5c3","Type":"ContainerDied","Data":"77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea"}
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.783688 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea"
Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.783737 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hlgnm"
Need to start a new one" pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.790070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerStarted","Data":"1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4"} Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.840551 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-qjrfx" podStartSLOduration=3.285582642 podStartE2EDuration="1m0.840531229s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="2026-01-28 18:36:48.383499235 +0000 UTC m=+1419.210062056" lastFinishedPulling="2026-01-28 18:37:45.938447822 +0000 UTC m=+1476.765010643" observedRunningTime="2026-01-28 18:37:46.837233516 +0000 UTC m=+1477.663796347" watchObservedRunningTime="2026-01-28 18:37:46.840531229 +0000 UTC m=+1477.667094050" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311073 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-848676699d-9lbcr"] Jan 28 18:37:47 crc kubenswrapper[4985]: E0128 18:37:47.311559 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="init" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311575 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="init" Jan 28 18:37:47 crc kubenswrapper[4985]: E0128 18:37:47.311588 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f788adab-3912-43da-869e-2450d65b761f" containerName="placement-db-sync" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311593 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f788adab-3912-43da-869e-2450d65b761f" containerName="placement-db-sync" Jan 28 18:37:47 crc kubenswrapper[4985]: E0128 18:37:47.311605 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" containerName="keystone-bootstrap" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311611 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" containerName="keystone-bootstrap" Jan 28 18:37:47 crc kubenswrapper[4985]: E0128 18:37:47.311639 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311644 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311826 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311841 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" containerName="keystone-bootstrap" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311862 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f788adab-3912-43da-869e-2450d65b761f" containerName="placement-db-sync" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.313017 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.319680 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.320423 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fpld6" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.320629 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.320806 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.321043 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.337774 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-77c7879f98-bcrvp"] Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.339286 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.346747 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.346970 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.347122 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.347769 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.347857 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g7p4d" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.347979 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.355090 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-848676699d-9lbcr"] Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.367740 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77c7879f98-bcrvp"] Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473769 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-public-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473827 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-config-data\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-internal-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473898 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-public-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473912 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-internal-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473951 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-logs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473980 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-combined-ca-bundle\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473995 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-credential-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474021 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-scripts\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474038 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-combined-ca-bundle\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474071 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-scripts\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474110 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nj2\" (UniqueName: \"kubernetes.io/projected/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-kube-api-access-m5nj2\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474168 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9gw\" (UniqueName: \"kubernetes.io/projected/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-kube-api-access-6w9gw\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474207 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-fernet-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474239 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-config-data\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.576616 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w9gw\" (UniqueName: \"kubernetes.io/projected/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-kube-api-access-6w9gw\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577033 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-fernet-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577154 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-config-data\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577314 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-public-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577430 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-config-data\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577566 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-internal-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577714 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-public-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577806 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-internal-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577962 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-logs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578090 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-combined-ca-bundle\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578189 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-credential-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578337 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-scripts\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578444 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-combined-ca-bundle\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-scripts\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578686 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nj2\" (UniqueName: 
\"kubernetes.io/projected/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-kube-api-access-m5nj2\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.580395 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-logs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.584328 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-credential-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.584392 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-public-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.584873 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-scripts\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.585038 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-combined-ca-bundle\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.585155 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-config-data\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.585175 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-config-data\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.585597 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-public-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.589222 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-scripts\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc 
kubenswrapper[4985]: I0128 18:37:47.589601 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-combined-ca-bundle\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.591915 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-internal-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.593104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-fernet-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.594337 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-internal-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.597217 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nj2\" (UniqueName: \"kubernetes.io/projected/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-kube-api-access-m5nj2\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.603052 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w9gw\" (UniqueName: \"kubernetes.io/projected/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-kube-api-access-6w9gw\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.658320 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.671760 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.824674 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s8hs9" event={"ID":"feecd29d-1d64-47f4-a1af-e634b7d87f3a","Type":"ContainerStarted","Data":"ff21852bdb082ecfb847ad06c015a8a45e3369552ad08ad1a4b52a4cb479bc06"} Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.463513 4985 util.go:48] "No ready sandbox for pod can be found. 
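
The VerifyControllerAttachedVolume / MountVolume.SetUp sequence above is the kubelet working through the Secret-backed volumes declared by keystone-77c7879f98-bcrvp and placement-848676699d-9lbcr. A sketch of the pod-spec fragment behind one of those entries, built with the k8s.io/api types; the volume and secret names come from the log, while the mount path, file mode, and read-only flag are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// keystoneConfigVolume sketches the pod-spec fragment behind the
// "config-data" MountVolume entries for keystone-77c7879f98-bcrvp.
// Names are from the log; path, mode, and read-only flag are assumptions.
func keystoneConfigVolume() (corev1.Volume, corev1.VolumeMount) {
	mode := int32(0o440) // assumed
	vol := corev1.Volume{
		Name: "config-data",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  "keystone-config-data",
				DefaultMode: &mode,
			},
		},
	}
	mnt := corev1.VolumeMount{
		Name:      "config-data",
		MountPath: "/var/lib/config-data", // assumed
		ReadOnly:  true,
	}
	return vol, mnt
}

func main() {
	v, m := keystoneConfigVolume()
	fmt.Println(v.Name, "->", m.MountPath)
}
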
Need to start a new one" pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.488624 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-s8hs9" podStartSLOduration=5.403375852 podStartE2EDuration="1m2.488603968s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="2026-01-28 18:36:48.851331313 +0000 UTC m=+1419.677894134" lastFinishedPulling="2026-01-28 18:37:45.936559429 +0000 UTC m=+1476.763122250" observedRunningTime="2026-01-28 18:37:47.856236945 +0000 UTC m=+1478.682799766" watchObservedRunningTime="2026-01-28 18:37:48.488603968 +0000 UTC m=+1479.315166789" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.570234 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77c7879f98-bcrvp"] Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.586591 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-848676699d-9lbcr"] Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.608581 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") pod \"2ba5eedf-14b8-45ce-b738-e41a6daff299\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.609619 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") pod \"2ba5eedf-14b8-45ce-b738-e41a6daff299\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.609838 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") pod \"2ba5eedf-14b8-45ce-b738-e41a6daff299\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.614908 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2ba5eedf-14b8-45ce-b738-e41a6daff299" (UID: "2ba5eedf-14b8-45ce-b738-e41a6daff299"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.618875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh" (OuterVolumeSpecName: "kube-api-access-9lcxh") pod "2ba5eedf-14b8-45ce-b738-e41a6daff299" (UID: "2ba5eedf-14b8-45ce-b738-e41a6daff299"). InnerVolumeSpecName "kube-api-access-9lcxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.650394 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ba5eedf-14b8-45ce-b738-e41a6daff299" (UID: "2ba5eedf-14b8-45ce-b738-e41a6daff299"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.713357 4985 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.713406 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.713419 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.892438 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77c7879f98-bcrvp" event={"ID":"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b","Type":"ContainerStarted","Data":"6f4553e8c8e44fd69834b780e370098e87fb1e04fc10ff7cc16b7301aa8daf3a"} Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.916521 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.916605 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9w9wm" event={"ID":"2ba5eedf-14b8-45ce-b738-e41a6daff299","Type":"ContainerDied","Data":"d797c3ffe3dba6a95e4e6284ce4ebd9bc07a285808da1bdf5575d32b4671bc8a"} Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.916646 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d797c3ffe3dba6a95e4e6284ce4ebd9bc07a285808da1bdf5575d32b4671bc8a" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.918305 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-848676699d-9lbcr" event={"ID":"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1","Type":"ContainerStarted","Data":"ac23c57e002cb7459b93a282e6b14ac22cc7d6f52a2f2c5a143106c014002033"} Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.063026 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6cc6bcfccd-rh55k"] Jan 28 18:37:49 crc kubenswrapper[4985]: E0128 18:37:49.063650 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba5eedf-14b8-45ce-b738-e41a6daff299" containerName="barbican-db-sync" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.063665 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba5eedf-14b8-45ce-b738-e41a6daff299" containerName="barbican-db-sync" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.063913 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba5eedf-14b8-45ce-b738-e41a6daff299" containerName="barbican-db-sync" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.089916 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.096720 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.097345 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.097757 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fl96f" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.235658 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data-custom\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.235919 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-combined-ca-bundle\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.236014 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-logs\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.236361 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2vf\" (UniqueName: \"kubernetes.io/projected/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-kube-api-access-5z2vf\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.236476 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.251922 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6c84c9469f-9xntt"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.257847 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.260356 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338862 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data-custom\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338942 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data-custom\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338965 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-combined-ca-bundle\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-logs\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339010 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-combined-ca-bundle\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339061 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhkk\" (UniqueName: \"kubernetes.io/projected/d885ddad-ecc9-4b73-ad9e-9da819f95107-kube-api-access-xhhkk\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2vf\" (UniqueName: \"kubernetes.io/projected/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-kube-api-access-5z2vf\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " 
pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339175 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d885ddad-ecc9-4b73-ad9e-9da819f95107-logs\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.346606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-logs\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.347156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-combined-ca-bundle\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.362844 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cc6bcfccd-rh55k"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.363118 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c84c9469f-9xntt"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.363131 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.376227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data-custom\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.382073 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.382735 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2vf\" (UniqueName: \"kubernetes.io/projected/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-kube-api-access-5z2vf\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.393400 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.416118 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.441449 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data-custom\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.441491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.441582 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-combined-ca-bundle\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.441662 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhkk\" (UniqueName: \"kubernetes.io/projected/d885ddad-ecc9-4b73-ad9e-9da819f95107-kube-api-access-xhhkk\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.450749 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.453406 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.458791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d885ddad-ecc9-4b73-ad9e-9da819f95107-logs\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.459284 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d885ddad-ecc9-4b73-ad9e-9da819f95107-logs\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.459612 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.468020 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.469019 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.513672 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-combined-ca-bundle\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.542773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhkk\" (UniqueName: \"kubernetes.io/projected/d885ddad-ecc9-4b73-ad9e-9da819f95107-kube-api-access-xhhkk\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.543297 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data-custom\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.547020 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.560850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.560911 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.560944 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.560972 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561022 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561085 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561105 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561127 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561157 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561229 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.593734 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.664558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.664889 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.664929 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.664995 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665070 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665090 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: 
\"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665273 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665325 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.666415 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.666517 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.667103 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.667785 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.668710 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.672627 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 
18:37:49.673496 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.673748 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.674428 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.692771 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.700955 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.757948 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.780377 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.009356 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77c7879f98-bcrvp" event={"ID":"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b","Type":"ContainerStarted","Data":"69a1467b553a6c6558576781ca2b4d8370bd6677cad738b1106e12f17507729c"} Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.009789 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.046875 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-848676699d-9lbcr" event={"ID":"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1","Type":"ContainerStarted","Data":"542eb0db0cbf56f068474f29f6fc77fe5b6a9c54b8c0b18c390c937adb6c8897"} Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.046921 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-848676699d-9lbcr" event={"ID":"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1","Type":"ContainerStarted","Data":"7a083ab4004f72bbdd409db978d2a2bb717e0d1cc28527fe9e0320b124be70ad"} Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.049386 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.055374 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.159286 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-77c7879f98-bcrvp" podStartSLOduration=3.159235554 podStartE2EDuration="3.159235554s" podCreationTimestamp="2026-01-28 18:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:50.076838637 +0000 UTC m=+1480.903401468" watchObservedRunningTime="2026-01-28 18:37:50.159235554 +0000 UTC m=+1480.985798375" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.238516 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-848676699d-9lbcr" podStartSLOduration=3.238493441 podStartE2EDuration="3.238493441s" podCreationTimestamp="2026-01-28 18:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:50.198916594 +0000 UTC m=+1481.025479435" watchObservedRunningTime="2026-01-28 18:37:50.238493441 +0000 UTC m=+1481.065056262" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.299461 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cc6bcfccd-rh55k"] Jan 28 18:37:50 crc kubenswrapper[4985]: W0128 18:37:50.375474 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4b18150_cbd6_4c6f_a28b_8c66b1e875f2.slice/crio-0deb2fa615be711bba18d5f5e24ddaf749483a93a2d2bce21ee2afa867b80533 WatchSource:0}: Error finding container 0deb2fa615be711bba18d5f5e24ddaf749483a93a2d2bce21ee2afa867b80533: Status 404 returned error can't find the container with id 0deb2fa615be711bba18d5f5e24ddaf749483a93a2d2bce21ee2afa867b80533 Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.634918 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-6c84c9469f-9xntt"] Jan 28 18:37:50 crc kubenswrapper[4985]: W0128 18:37:50.643413 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd885ddad_ecc9_4b73_ad9e_9da819f95107.slice/crio-898a9604b4b483a6d2263993a2bdd40850eac63f8b7f263682de22c5e6527f04 WatchSource:0}: Error finding container 898a9604b4b483a6d2263993a2bdd40850eac63f8b7f263682de22c5e6527f04: Status 404 returned error can't find the container with id 898a9604b4b483a6d2263993a2bdd40850eac63f8b7f263682de22c5e6527f04 Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.916893 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.030358 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.035335 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.035394 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.059500 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" event={"ID":"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2","Type":"ContainerStarted","Data":"0deb2fa615be711bba18d5f5e24ddaf749483a93a2d2bce21ee2afa867b80533"} Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.061642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c84c9469f-9xntt" event={"ID":"d885ddad-ecc9-4b73-ad9e-9da819f95107","Type":"ContainerStarted","Data":"898a9604b4b483a6d2263993a2bdd40850eac63f8b7f263682de22c5e6527f04"} Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.063801 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerStarted","Data":"dd0880e0b96ac3a23f885b549586af18ca3a6b0027c6f034c1105c8d228a817a"} Jan 28 18:37:51 crc kubenswrapper[4985]: W0128 18:37:51.088498 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod523590c1_de57_4248_aa7f_2c52024d649e.slice/crio-b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5 WatchSource:0}: Error finding container b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5: Status 404 returned error can't find the container with id b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5 Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.077262 4985 generic.go:334] "Generic (PLEG): container finished" podID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerID="e23d36aeeab5ee663f101fb703501f68e124bafdaaddaec3cfc6864e9e9081f8" exitCode=0 Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.077458 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerDied","Data":"e23d36aeeab5ee663f101fb703501f68e124bafdaaddaec3cfc6864e9e9081f8"} Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.082627 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" 
event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerStarted","Data":"12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6"} Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.082680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerStarted","Data":"b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5"} Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.131173 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:37:52 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:37:52 crc kubenswrapper[4985]: > Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.621169 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-668ffb7f9d-shvfm"] Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.623441 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.626849 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.632510 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.659749 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data-custom\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.659880 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b28283-6f65-478e-952d-f965423f413e-logs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.659929 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-combined-ca-bundle\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.659970 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-public-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.660089 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p8wl\" (UniqueName: \"kubernetes.io/projected/04b28283-6f65-478e-952d-f965423f413e-kube-api-access-5p8wl\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: 
\"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.660123 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-internal-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.660218 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.665264 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-668ffb7f9d-shvfm"] Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.761784 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b28283-6f65-478e-952d-f965423f413e-logs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762050 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-combined-ca-bundle\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-public-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762158 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p8wl\" (UniqueName: \"kubernetes.io/projected/04b28283-6f65-478e-952d-f965423f413e-kube-api-access-5p8wl\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-internal-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762280 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762314 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data-custom\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.776071 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b28283-6f65-478e-952d-f965423f413e-logs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.787334 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data-custom\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.788458 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p8wl\" (UniqueName: \"kubernetes.io/projected/04b28283-6f65-478e-952d-f965423f413e-kube-api-access-5p8wl\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.789391 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.789473 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-public-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.790624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-internal-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.800730 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-combined-ca-bundle\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.984037 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.115423 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerStarted","Data":"c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71"} Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.115574 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.118757 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerStarted","Data":"2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6"} Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.119360 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.119722 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.149034 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" podStartSLOduration=4.149016443 podStartE2EDuration="4.149016443s" podCreationTimestamp="2026-01-28 18:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:53.137692933 +0000 UTC m=+1483.964255754" watchObservedRunningTime="2026-01-28 18:37:53.149016443 +0000 UTC m=+1483.975579264" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.176305 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59699bb574-kg5jx" podStartSLOduration=4.176289013 podStartE2EDuration="4.176289013s" podCreationTimestamp="2026-01-28 18:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:53.170435387 +0000 UTC m=+1483.996998218" watchObservedRunningTime="2026-01-28 18:37:53.176289013 +0000 UTC m=+1484.002851834" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.743399 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8fg44" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" probeResult="failure" output=< Jan 28 18:37:53 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:37:53 crc kubenswrapper[4985]: > Jan 28 18:37:54 crc kubenswrapper[4985]: I0128 18:37:54.775915 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-668ffb7f9d-shvfm"] Jan 28 18:37:55 crc kubenswrapper[4985]: I0128 18:37:55.141092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c84c9469f-9xntt" event={"ID":"d885ddad-ecc9-4b73-ad9e-9da819f95107","Type":"ContainerStarted","Data":"65d032df38073e7eed22de53eed520ab01274bb31a016414dd7747a7dc134f9f"} Jan 28 18:37:55 crc kubenswrapper[4985]: I0128 18:37:55.144559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" 
event={"ID":"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2","Type":"ContainerStarted","Data":"b187f34b7b0c1a993d79520e94dd72989fc4652080d3971e8bb237cf1a5f5254"} Jan 28 18:37:58 crc kubenswrapper[4985]: I0128 18:37:58.182937 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668ffb7f9d-shvfm" event={"ID":"04b28283-6f65-478e-952d-f965423f413e","Type":"ContainerStarted","Data":"1450d3d2d780e38c895e0250be3018badb615c82f768d6a788516b52de14c5ca"} Jan 28 18:37:59 crc kubenswrapper[4985]: I0128 18:37:59.760083 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:59 crc kubenswrapper[4985]: I0128 18:37:59.868567 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:37:59 crc kubenswrapper[4985]: I0128 18:37:59.868809 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="dnsmasq-dns" containerID="cri-o://16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e" gracePeriod=10 Jan 28 18:38:00 crc kubenswrapper[4985]: I0128 18:38:00.223357 4985 generic.go:334] "Generic (PLEG): container finished" podID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerID="16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e" exitCode=0 Jan 28 18:38:00 crc kubenswrapper[4985]: I0128 18:38:00.223492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerDied","Data":"16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e"} Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.250205 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerDied","Data":"bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199"} Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.250615 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.250976 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278079 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278225 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278371 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278417 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278526 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278553 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.290442 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv" (OuterVolumeSpecName: "kube-api-access-g6nkv") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "kube-api-access-g6nkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.383473 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.544196 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.547650 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.556471 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config" (OuterVolumeSpecName: "config") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.570414 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.576152 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595626 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595672 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595693 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595704 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595716 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: E0128 18:38:01.660855 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.212952 4985 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:02 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:02 crc kubenswrapper[4985]: > Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.261804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" event={"ID":"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2","Type":"ContainerStarted","Data":"f17b4f1c899896446fc4d315cea6eb1314dd9bdda7a98f219356bcd0896588d7"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.264946 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerStarted","Data":"ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.265161 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="ceilometer-notification-agent" containerID="cri-o://e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e" gracePeriod=30 Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.265292 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.265333 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="proxy-httpd" containerID="cri-o://ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6" gracePeriod=30 Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.265373 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="sg-core" containerID="cri-o://1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4" gracePeriod=30 Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.274505 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668ffb7f9d-shvfm" event={"ID":"04b28283-6f65-478e-952d-f965423f413e","Type":"ContainerStarted","Data":"bc5e99b080cb28b67a368202056e01128443f9359cda4cba67410852e4d84ba9"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.274559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668ffb7f9d-shvfm" event={"ID":"04b28283-6f65-478e-952d-f965423f413e","Type":"ContainerStarted","Data":"5fbbc6c10659230bfc586124b91a3a8cec90cfd9be6b10949193dfdf305e6c6a"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.274751 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.274769 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.288540 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c84c9469f-9xntt" event={"ID":"d885ddad-ecc9-4b73-ad9e-9da819f95107","Type":"ContainerStarted","Data":"6beae4c3610560067d7f82af1bd5645b5653e1d0ddb60018480cdd6a1a8157c8"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.288610 4985 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.323782 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" podStartSLOduration=9.527566532 podStartE2EDuration="13.323754347s" podCreationTimestamp="2026-01-28 18:37:49 +0000 UTC" firstStartedPulling="2026-01-28 18:37:50.398660903 +0000 UTC m=+1481.225223714" lastFinishedPulling="2026-01-28 18:37:54.194848708 +0000 UTC m=+1485.021411529" observedRunningTime="2026-01-28 18:38:02.277911332 +0000 UTC m=+1493.104474153" watchObservedRunningTime="2026-01-28 18:38:02.323754347 +0000 UTC m=+1493.150317168" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.443491 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-668ffb7f9d-shvfm" podStartSLOduration=10.443468056 podStartE2EDuration="10.443468056s" podCreationTimestamp="2026-01-28 18:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:02.332835123 +0000 UTC m=+1493.159397944" watchObservedRunningTime="2026-01-28 18:38:02.443468056 +0000 UTC m=+1493.270030877" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.452789 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6c84c9469f-9xntt" podStartSLOduration=9.916963425 podStartE2EDuration="13.452771519s" podCreationTimestamp="2026-01-28 18:37:49 +0000 UTC" firstStartedPulling="2026-01-28 18:37:50.661972107 +0000 UTC m=+1481.488534928" lastFinishedPulling="2026-01-28 18:37:54.197780201 +0000 UTC m=+1485.024343022" observedRunningTime="2026-01-28 18:38:02.352994292 +0000 UTC m=+1493.179557113" watchObservedRunningTime="2026-01-28 18:38:02.452771519 +0000 UTC m=+1493.279334340" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.489951 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.501295 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.655604 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:38:02 crc kubenswrapper[4985]: E0128 18:38:02.743286 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ab3789a_5136_46f9_94bb_ab43720d0723.slice/crio-bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ab3789a_5136_46f9_94bb_ab43720d0723.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d02ed_9b38_404a_8926_9d4aaf7bab57.slice/crio-conmon-1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d02ed_9b38_404a_8926_9d4aaf7bab57.slice/crio-ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d02ed_9b38_404a_8926_9d4aaf7bab57.slice/crio-conmon-ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.948116 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.284600 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" path="/var/lib/kubelet/pods/8ab3789a-5136-46f9-94bb-ab43720d0723/volumes" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317437 4985 generic.go:334] "Generic (PLEG): container finished" podID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerID="ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6" exitCode=0 Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317473 4985 generic.go:334] "Generic (PLEG): container finished" podID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerID="1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4" exitCode=2 Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317481 4985 generic.go:334] "Generic (PLEG): container finished" podID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerID="e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e" exitCode=0 Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317523 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerDied","Data":"ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6"} Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317575 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerDied","Data":"1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4"} Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317587 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerDied","Data":"e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e"} Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.721224 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8fg44" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:03 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:03 crc kubenswrapper[4985]: > Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.866464 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.947908 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.948273 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.948407 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.948641 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.948780 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.949061 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.949178 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.949385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.951268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.953062 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.953146 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.993615 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629" (OuterVolumeSpecName: "kube-api-access-4s629") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "kube-api-access-4s629". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.012432 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts" (OuterVolumeSpecName: "scripts") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.034930 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.056875 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.056949 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.056969 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.062752 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.075406 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data" (OuterVolumeSpecName: "config-data") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.159081 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.159124 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.330523 4985 generic.go:334] "Generic (PLEG): container finished" podID="dda9fdbc-ce81-4e63-b32f-733379d893d4" containerID="d27c06d418e20207c2740cbbbe652b37993ed962b6ece756db68f47e6fdcdfce" exitCode=0 Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.330597 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qjrfx" event={"ID":"dda9fdbc-ce81-4e63-b32f-733379d893d4","Type":"ContainerDied","Data":"d27c06d418e20207c2740cbbbe652b37993ed962b6ece756db68f47e6fdcdfce"} Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.337671 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerDied","Data":"3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6"} Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.337738 4985 scope.go:117] "RemoveContainer" containerID="ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.337933 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.371642 4985 scope.go:117] "RemoveContainer" containerID="1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.422358 4985 scope.go:117] "RemoveContainer" containerID="e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.441404 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.461658 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475184 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475660 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="ceilometer-notification-agent" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475683 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="ceilometer-notification-agent" Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475698 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="proxy-httpd" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475706 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="proxy-httpd" Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475728 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="sg-core" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475736 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="sg-core" Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475749 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="init" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475756 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="init" Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475779 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="dnsmasq-dns" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475785 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="dnsmasq-dns" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.476467 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="ceilometer-notification-agent" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.476491 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="proxy-httpd" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.476507 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="sg-core" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.476519 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="dnsmasq-dns" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.478974 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.482486 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.482649 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.496185 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567452 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567545 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567616 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567673 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567798 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669368 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: 
I0128 18:38:04.669428 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669469 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669489 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669510 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669542 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669563 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.670103 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.670663 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.674204 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.674519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.675367 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.675955 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.697628 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.809858 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.295908 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" path="/var/lib/kubelet/pods/2d1d02ed-9b38-404a-8926-9d4aaf7bab57/volumes" Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.297337 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.350516 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"24f37b343823af87929d4be979bf978ca07c8b7fe426ee346d1a058ab94e67be"} Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.772605 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qjrfx" Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.910148 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") pod \"dda9fdbc-ce81-4e63-b32f-733379d893d4\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.910609 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") pod \"dda9fdbc-ce81-4e63-b32f-733379d893d4\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.910678 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") pod \"dda9fdbc-ce81-4e63-b32f-733379d893d4\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.915150 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf" (OuterVolumeSpecName: "kube-api-access-8n5mf") pod "dda9fdbc-ce81-4e63-b32f-733379d893d4" (UID: "dda9fdbc-ce81-4e63-b32f-733379d893d4"). InnerVolumeSpecName "kube-api-access-8n5mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.953154 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dda9fdbc-ce81-4e63-b32f-733379d893d4" (UID: "dda9fdbc-ce81-4e63-b32f-733379d893d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.002627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data" (OuterVolumeSpecName: "config-data") pod "dda9fdbc-ce81-4e63-b32f-733379d893d4" (UID: "dda9fdbc-ce81-4e63-b32f-733379d893d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.014878 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.014930 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.014946 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.375266 4985 generic.go:334] "Generic (PLEG): container finished" podID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" containerID="461350d6795ff69f1fd203af637d4dd96dfc2a84c72f138630ab057e524c2df1" exitCode=0 Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.375293 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwwcb" event={"ID":"b64f0d6c-55b7-4eac-85f6-e78b581cbebc","Type":"ContainerDied","Data":"461350d6795ff69f1fd203af637d4dd96dfc2a84c72f138630ab057e524c2df1"} Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.378084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c"} Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.379478 4985 generic.go:334] "Generic (PLEG): container finished" podID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" containerID="ff21852bdb082ecfb847ad06c015a8a45e3369552ad08ad1a4b52a4cb479bc06" exitCode=0 Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.379527 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s8hs9" event={"ID":"feecd29d-1d64-47f4-a1af-e634b7d87f3a","Type":"ContainerDied","Data":"ff21852bdb082ecfb847ad06c015a8a45e3369552ad08ad1a4b52a4cb479bc06"} Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.381242 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qjrfx" event={"ID":"dda9fdbc-ce81-4e63-b32f-733379d893d4","Type":"ContainerDied","Data":"29e494db6715043d1dade09c32717d476d44c5754f6d809807167b425de76172"} Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.381298 4985 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e494db6715043d1dade09c32717d476d44c5754f6d809807167b425de76172" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.381358 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qjrfx" Jan 28 18:38:07 crc kubenswrapper[4985]: I0128 18:38:07.393918 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed"} Jan 28 18:38:07 crc kubenswrapper[4985]: I0128 18:38:07.969842 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.010032 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080260 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080318 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080343 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080371 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080393 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080439 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.083358 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.088377 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.091366 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts" (OuterVolumeSpecName: "scripts") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.091544 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4" (OuterVolumeSpecName: "kube-api-access-szgd4") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "kube-api-access-szgd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.136484 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.155787 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data" (OuterVolumeSpecName: "config-data") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.182564 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") pod \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.182789 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx7rs\" (UniqueName: \"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") pod \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.182827 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") pod \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184428 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184676 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184691 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184699 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184707 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184715 4985 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.188575 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs" (OuterVolumeSpecName: "kube-api-access-kx7rs") pod "b64f0d6c-55b7-4eac-85f6-e78b581cbebc" (UID: "b64f0d6c-55b7-4eac-85f6-e78b581cbebc"). InnerVolumeSpecName "kube-api-access-kx7rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.215511 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config" (OuterVolumeSpecName: "config") pod "b64f0d6c-55b7-4eac-85f6-e78b581cbebc" (UID: "b64f0d6c-55b7-4eac-85f6-e78b581cbebc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.218374 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b64f0d6c-55b7-4eac-85f6-e78b581cbebc" (UID: "b64f0d6c-55b7-4eac-85f6-e78b581cbebc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.287367 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.287405 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx7rs\" (UniqueName: \"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.287430 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.404678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwwcb" event={"ID":"b64f0d6c-55b7-4eac-85f6-e78b581cbebc","Type":"ContainerDied","Data":"94e9ea7881e540161402fe0b16a42aca0004dbafe8de2259a73da5d4a537b2b5"} Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.404727 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e9ea7881e540161402fe0b16a42aca0004dbafe8de2259a73da5d4a537b2b5" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.404795 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.407318 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275"} Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.408507 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s8hs9" event={"ID":"feecd29d-1d64-47f4-a1af-e634b7d87f3a","Type":"ContainerDied","Data":"1b5ced815ed25f34faa5ff921cdb8509638b39e75db318b0ce2521c26d4d3829"} Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.408537 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b5ced815ed25f34faa5ff921cdb8509638b39e75db318b0ce2521c26d4d3829" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.408612 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.639523 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"] Jan 28 18:38:08 crc kubenswrapper[4985]: E0128 18:38:08.640032 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" containerName="neutron-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640049 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" containerName="neutron-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: E0128 18:38:08.640088 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" containerName="cinder-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640094 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" containerName="cinder-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: E0128 18:38:08.640110 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" containerName="heat-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640117 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" containerName="heat-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640324 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" containerName="neutron-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640349 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" containerName="cinder-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640358 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" containerName="heat-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.641582 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.672548 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.782511 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.785013 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802427 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802655 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802771 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802827 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802945 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.803091 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.810398 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.812812 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.815881 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.816083 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-r9qmf" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.816281 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.816452 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.824075 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.830435 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.824416 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.824515 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.824567 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cnbtl" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.880905 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.912810 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.912879 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.912983 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913029 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913092 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") pod 
\"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913213 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913298 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913378 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913424 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913471 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913526 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913572 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") pod 
\"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913638 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913664 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.914584 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.934358 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.938783 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.940040 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.943993 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.956159 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"] Jan 28 18:38:08 crc kubenswrapper[4985]: E0128 18:38:08.957320 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-csvjk], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" podUID="deec912d-352f-4d4a-9259-cf645aab16da" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.981371 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.987463 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.990084 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016174 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016238 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016296 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016360 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016454 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016485 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016508 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016527 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.029553 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.033760 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.045994 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.048703 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.048748 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.049913 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.066116 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.066865 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.068892 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.069869 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.082010 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.093917 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.128793 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.130538 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.139977 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.140135 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.141258 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.141476 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.141886 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.150334 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.152837 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.157529 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.159840 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.172285 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.246751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.246894 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.246950 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.246986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.247065 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.248757 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251671 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251705 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251730 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251901 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.252464 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.252600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.254396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.254959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.255263 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.255471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" 
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.275858 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.344667 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-j67tm"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363479 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363506 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363685 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363785 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363822 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363847 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.371418 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.371474 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.390585 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.392529 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.393533 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.417550 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.418486 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.450581 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.481375 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c"
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.568891 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.569001 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.569026 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.569110 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.569160 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.572985 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.573942 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.574383 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.574421 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.575450 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.575467 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.575487 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.576514 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.576728 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config" (OuterVolumeSpecName: "config") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.589083 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk" (OuterVolumeSpecName: "kube-api-access-csvjk") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "kube-api-access-csvjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.656667 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.678786 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.678823 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.678833 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.114522 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.131691 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:10 crc kubenswrapper[4985]: W0128 18:38:10.140446 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3a8f8a9_e888_4754_94da_0ef0e972c995.slice/crio-2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc WatchSource:0}: Error finding container 2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc: Status 404 returned error can't find the container with id 2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.303018 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:10 crc kubenswrapper[4985]: W0128 18:38:10.304393 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda366d8d5_30e8_4d85_aadc_af770270ffcf.slice/crio-c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678 WatchSource:0}: Error finding container c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678: Status 404 returned error can't find the container with id c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678 Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.383806 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"] Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.460722 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerStarted","Data":"c9f68ac609dd2f41623830c63a61e02d6c06dc430a7f02a9f5349b8bf758436d"} Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.461648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerStarted","Data":"31388f0bf206620f4149df49b7f517c8ef12fb63e7bf921a506b07d05954b8ce"} Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.462480 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerStarted","Data":"2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc"} Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 
18:38:10.463362 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.463352 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerStarted","Data":"c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678"} Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.530099 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"] Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.537163 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"] Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.333281 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deec912d-352f-4d4a-9259-cf645aab16da" path="/var/lib/kubelet/pods/deec912d-352f-4d4a-9259-cf645aab16da/volumes" Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.342820 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.516161 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2"} Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.527484 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerStarted","Data":"a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7"} Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.541121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerStarted","Data":"c3d6846527cefd541216dec8dce99f14831f1db9f838810b3978ccef4ebab806"} Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.581824 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.911514 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:11.993853 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:11.994087 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59699bb574-kg5jx" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" containerID="cri-o://12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6" gracePeriod=30 Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:11.994601 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59699bb574-kg5jx" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" containerID="cri-o://2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6" gracePeriod=30 Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.157243 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" 
podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:12 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:12 crc kubenswrapper[4985]: > Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.553284 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerStarted","Data":"b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee"} Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.556084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerStarted","Data":"f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb"} Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.557827 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.560215 4985 generic.go:334] "Generic (PLEG): container finished" podID="523590c1-de57-4248-aa7f-2c52024d649e" containerID="12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6" exitCode=143 Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.560284 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerDied","Data":"12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6"} Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.562543 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerID="c3d6846527cefd541216dec8dce99f14831f1db9f838810b3978ccef4ebab806" exitCode=0 Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.563031 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerDied","Data":"c3d6846527cefd541216dec8dce99f14831f1db9f838810b3978ccef4ebab806"} Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.563297 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.583817 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d8b8b566d-89qjp" podStartSLOduration=4.583798911 podStartE2EDuration="4.583798911s" podCreationTimestamp="2026-01-28 18:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:12.576083133 +0000 UTC m=+1503.402645964" watchObservedRunningTime="2026-01-28 18:38:12.583798911 +0000 UTC m=+1503.410361732" Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.608749 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.123927336 podStartE2EDuration="8.608718075s" podCreationTimestamp="2026-01-28 18:38:04 +0000 UTC" firstStartedPulling="2026-01-28 18:38:05.308140052 +0000 UTC m=+1496.134702873" lastFinishedPulling="2026-01-28 18:38:10.792930791 +0000 UTC m=+1501.619493612" observedRunningTime="2026-01-28 18:38:12.601759478 +0000 UTC m=+1503.428322309" watchObservedRunningTime="2026-01-28 18:38:12.608718075 +0000 UTC m=+1503.435280896" Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 
18:38:12.722686 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.778801 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.983910 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:38:14 crc kubenswrapper[4985]: I0128 18:38:14.591694 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8fg44" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" containerID="cri-o://63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249" gracePeriod=2 Jan 28 18:38:14 crc kubenswrapper[4985]: I0128 18:38:14.592335 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerStarted","Data":"911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106"} Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.437089 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f49f9645f-bs9wr"] Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.439650 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.442068 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.444372 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.448265 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f49f9645f-bs9wr"] Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.494740 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59699bb574-kg5jx" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": read tcp 10.217.0.2:60860->10.217.0.200:9311: read: connection reset by peer" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.495039 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59699bb574-kg5jx" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": read tcp 10.217.0.2:60868->10.217.0.200:9311: read: connection reset by peer" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-httpd-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578703 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-ovndb-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: 
\"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578748 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-combined-ca-bundle\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578846 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578902 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-public-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhwr\" (UniqueName: \"kubernetes.io/projected/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-kube-api-access-9xhwr\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.579183 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-internal-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.613354 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerStarted","Data":"0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0"} Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.613507 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api-log" containerID="cri-o://b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee" gracePeriod=30 Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.613774 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.614059 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api" containerID="cri-o://0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0" gracePeriod=30 Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.621604 4985 generic.go:334] "Generic (PLEG): container finished" podID="493defdf-169c-4278-b370-69068ec73439" containerID="63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249" exitCode=0 Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.621686 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerDied","Data":"63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249"} Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.625267 4985 generic.go:334] "Generic (PLEG): container finished" podID="523590c1-de57-4248-aa7f-2c52024d649e" containerID="2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6" exitCode=0 Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.625422 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerDied","Data":"2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6"} Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.625513 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.646801 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.646780176 podStartE2EDuration="6.646780176s" podCreationTimestamp="2026-01-28 18:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:15.641639881 +0000 UTC m=+1506.468202702" watchObservedRunningTime="2026-01-28 18:38:15.646780176 +0000 UTC m=+1506.473342997" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.675289 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" podStartSLOduration=7.67526922 podStartE2EDuration="7.67526922s" podCreationTimestamp="2026-01-28 18:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:15.671322439 +0000 UTC m=+1506.497885270" watchObservedRunningTime="2026-01-28 18:38:15.67526922 +0000 UTC m=+1506.501832041" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681798 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-ovndb-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681839 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-combined-ca-bundle\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681879 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681913 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-public-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: 
\"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681934 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xhwr\" (UniqueName: \"kubernetes.io/projected/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-kube-api-access-9xhwr\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.682021 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-internal-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.682071 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-httpd-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.688793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-combined-ca-bundle\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.688858 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-internal-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.688974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.689675 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-httpd-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.689890 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-public-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.706165 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-ovndb-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.715844 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9xhwr\" (UniqueName: \"kubernetes.io/projected/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-kube-api-access-9xhwr\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.761330 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.217078 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.302013 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") pod \"493defdf-169c-4278-b370-69068ec73439\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.302060 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") pod \"493defdf-169c-4278-b370-69068ec73439\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.302218 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") pod \"493defdf-169c-4278-b370-69068ec73439\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.303843 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities" (OuterVolumeSpecName: "utilities") pod "493defdf-169c-4278-b370-69068ec73439" (UID: "493defdf-169c-4278-b370-69068ec73439"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.307421 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m" (OuterVolumeSpecName: "kube-api-access-dt55m") pod "493defdf-169c-4278-b370-69068ec73439" (UID: "493defdf-169c-4278-b370-69068ec73439"). InnerVolumeSpecName "kube-api-access-dt55m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.364134 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "493defdf-169c-4278-b370-69068ec73439" (UID: "493defdf-169c-4278-b370-69068ec73439"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.408907 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.408948 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.408963 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.786890 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerDied","Data":"80ceba888693469af3d53c546cb7c4eba0040a2f5c19424d7894edf743d991ac"} Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.786953 4985 scope.go:117] "RemoveContainer" containerID="63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.787118 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.810805 4985 generic.go:334] "Generic (PLEG): container finished" podID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerID="0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0" exitCode=0 Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.811003 4985 generic.go:334] "Generic (PLEG): container finished" podID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerID="b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee" exitCode=143 Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.810899 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerDied","Data":"0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0"} Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.811104 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerDied","Data":"b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee"} Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.835010 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.840433 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.852922 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.892493 4985 scope.go:117] "RemoveContainer" containerID="0f31ce051029b23ddf495fadb6b6c6e764037b32b8a976658fc8f5f168e24bfd" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.930277 4985 scope.go:117] "RemoveContainer" containerID="bb466fa56833f63c962ba1cccca2fbc2223625dc1bb00585f9df84071452e8e0" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932514 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932580 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932618 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932682 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932712 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932756 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932928 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.936149 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: 
"a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.936466 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs" (OuterVolumeSpecName: "logs") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.939233 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.940007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl" (OuterVolumeSpecName: "kube-api-access-9dvwl") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "kube-api-access-9dvwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.944225 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts" (OuterVolumeSpecName: "scripts") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.974277 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.011231 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.041936 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.041985 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.041999 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.042023 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.042036 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.042049 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.080535 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data" (OuterVolumeSpecName: "config-data") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.143225 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.143885 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.144041 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.144064 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.144188 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.144753 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.151841 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs" (OuterVolumeSpecName: "logs") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.158106 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.163299 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57" (OuterVolumeSpecName: "kube-api-access-phx57") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "kube-api-access-phx57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.198750 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.219461 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data" (OuterVolumeSpecName: "config-data") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.223620 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f49f9645f-bs9wr"] Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.247733 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.248272 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.248297 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.248307 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.248315 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.283493 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="493defdf-169c-4278-b370-69068ec73439" path="/var/lib/kubelet/pods/493defdf-169c-4278-b370-69068ec73439/volumes" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.843143 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerStarted","Data":"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.848396 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f49f9645f-bs9wr" event={"ID":"2177b5b3-0121-4ff8-93dd-2f9ef36560f4","Type":"ContainerStarted","Data":"b38f86aab01647c33fd931b2887e8306fe6b60c3082f3c8a0524d15753040cbd"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.848434 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f49f9645f-bs9wr" 
event={"ID":"2177b5b3-0121-4ff8-93dd-2f9ef36560f4","Type":"ContainerStarted","Data":"f6a56e7ca2cbe55d9d96a7ec5b4109a59c5bae6874eb564b5e45153daa640a8d"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.859971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerDied","Data":"c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.860035 4985 scope.go:117] "RemoveContainer" containerID="0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.861471 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.883430 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerDied","Data":"b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.883546 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.916056 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.939221 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.949900 4985 scope.go:117] "RemoveContainer" containerID="b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.950091 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983314 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983858 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="extract-utilities" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983874 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="extract-utilities" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983882 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983890 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983898 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983904 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983924 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983931 4985 
state_mem.go:107] "Deleted CPUSet assignment" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983943 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983949 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983960 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983966 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983975 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="extract-content" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983981 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="extract-content" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984187 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984210 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984226 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984233 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984262 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.985507 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.990048 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.990302 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.994503 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.024097 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.037443 4985 scope.go:117] "RemoveContainer" containerID="2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068353 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-scripts\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068437 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7r6k\" (UniqueName: \"kubernetes.io/projected/841350c5-b9e8-4331-9282-e129f8152153-kube-api-access-z7r6k\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068470 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-public-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068501 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/841350c5-b9e8-4331-9282-e129f8152153-etc-machine-id\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068527 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068548 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841350c5-b9e8-4331-9282-e129f8152153-logs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 
18:38:18.068596 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data-custom\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068618 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.099411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/841350c5-b9e8-4331-9282-e129f8152153-etc-machine-id\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170433 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170463 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170485 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841350c5-b9e8-4331-9282-e129f8152153-logs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data-custom\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170554 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.171097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-scripts\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.171228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7r6k\" (UniqueName: 
\"kubernetes.io/projected/841350c5-b9e8-4331-9282-e129f8152153-kube-api-access-z7r6k\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.171526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-public-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.172155 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841350c5-b9e8-4331-9282-e129f8152153-logs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.172226 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/841350c5-b9e8-4331-9282-e129f8152153-etc-machine-id\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.178460 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-scripts\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.179123 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.180418 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.182505 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data-custom\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.183909 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-public-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.189815 4985 scope.go:117] "RemoveContainer" containerID="12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.195200 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.198102 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7r6k\" (UniqueName: \"kubernetes.io/projected/841350c5-b9e8-4331-9282-e129f8152153-kube-api-access-z7r6k\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.321638 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.899688 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.902043 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerStarted","Data":"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1"} Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.906514 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f49f9645f-bs9wr" event={"ID":"2177b5b3-0121-4ff8-93dd-2f9ef36560f4","Type":"ContainerStarted","Data":"69502d09c3c08ac438a5f391e8367403e3943212e34bd27ffee322b979a426f1"} Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.906668 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.953391 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.151288223 podStartE2EDuration="10.953367108s" podCreationTimestamp="2026-01-28 18:38:08 +0000 UTC" firstStartedPulling="2026-01-28 18:38:10.499489727 +0000 UTC m=+1501.326052548" lastFinishedPulling="2026-01-28 18:38:16.301568612 +0000 UTC m=+1507.128131433" observedRunningTime="2026-01-28 18:38:18.928224548 +0000 UTC m=+1509.754787369" watchObservedRunningTime="2026-01-28 18:38:18.953367108 +0000 UTC m=+1509.779929929" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.976591 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f49f9645f-bs9wr" podStartSLOduration=3.976565713 podStartE2EDuration="3.976565713s" podCreationTimestamp="2026-01-28 18:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:18.949571681 +0000 UTC m=+1509.776134502" watchObservedRunningTime="2026-01-28 18:38:18.976565713 +0000 UTC m=+1509.803128534" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.130874 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.275600 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="523590c1-de57-4248-aa7f-2c52024d649e" path="/var/lib/kubelet/pods/523590c1-de57-4248-aa7f-2c52024d649e/volumes" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.276238 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" path="/var/lib/kubelet/pods/a366d8d5-30e8-4d85-aadc-af770270ffcf/volumes" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.353770 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.430327 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.431069 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" containerID="cri-o://c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71" gracePeriod=10 Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.758526 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: connect: connection refused" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.933642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"841350c5-b9e8-4331-9282-e129f8152153","Type":"ContainerStarted","Data":"7a5dbf9806674a8b402004bcb6241785559d2470172868f6b3f6355f4dbb8231"} Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.934662 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"841350c5-b9e8-4331-9282-e129f8152153","Type":"ContainerStarted","Data":"5eb2c9b2d4b4c7eec82c9b4c50965c1dafe8e72106cb4de112b3e214c5037898"} Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.942318 4985 generic.go:334] "Generic (PLEG): container finished" podID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerID="c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71" exitCode=0 Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.942415 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerDied","Data":"c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71"} Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.520055 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.576298 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.621095 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740756 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740880 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740908 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740978 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740997 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.741073 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.765500 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd" (OuterVolumeSpecName: "kube-api-access-9r9fd") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "kube-api-access-9r9fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.834321 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.834540 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config" (OuterVolumeSpecName: "config") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.844521 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.844560 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.844603 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.853738 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.856543 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.900663 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.946632 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.946667 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.946679 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.957418 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"841350c5-b9e8-4331-9282-e129f8152153","Type":"ContainerStarted","Data":"a748d650d126ab8d46525fd8715fe314f85dc2f6816b2fac2b89d32e528f86ad"} Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.957797 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.961172 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerDied","Data":"dd0880e0b96ac3a23f885b549586af18ca3a6b0027c6f034c1105c8d228a817a"} Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.961234 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.961243 4985 scope.go:117] "RemoveContainer" containerID="c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.983898 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.983877965 podStartE2EDuration="3.983877965s" podCreationTimestamp="2026-01-28 18:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:20.983562656 +0000 UTC m=+1511.810125497" watchObservedRunningTime="2026-01-28 18:38:20.983877965 +0000 UTC m=+1511.810440786" Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.040449 4985 scope.go:117] "RemoveContainer" containerID="e23d36aeeab5ee663f101fb703501f68e124bafdaaddaec3cfc6864e9e9081f8" Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.074036 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.089358 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.191307 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.278340 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" path="/var/lib/kubelet/pods/960c828e-51af-4e3c-a916-513bc8cbb0ff/volumes" Jan 28 18:38:22 crc kubenswrapper[4985]: I0128 18:38:22.118107 4985 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:22 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:22 crc kubenswrapper[4985]: > Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.171401 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 18:38:24 crc kubenswrapper[4985]: E0128 18:38:24.172982 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="init" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.173056 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="init" Jan 28 18:38:24 crc kubenswrapper[4985]: E0128 18:38:24.173138 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.173191 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.173509 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.174406 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.176856 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.178169 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.178536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-664wv" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.183965 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.228576 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.228749 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57stt\" (UniqueName: \"kubernetes.io/projected/1d8f391e-0ed3-4969-b61b-5b9d602644fa-kube-api-access-57stt\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.228854 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.228940 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.331163 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.331621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57stt\" (UniqueName: \"kubernetes.io/projected/1d8f391e-0ed3-4969-b61b-5b9d602644fa-kube-api-access-57stt\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.331703 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.332142 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.332766 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.349105 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.349268 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.353508 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57stt\" (UniqueName: \"kubernetes.io/projected/1d8f391e-0ed3-4969-b61b-5b9d602644fa-kube-api-access-57stt\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.476116 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.505714 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.541321 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:25 crc kubenswrapper[4985]: I0128 18:38:25.019530 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="cinder-scheduler" containerID="cri-o://a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" gracePeriod=30 Jan 28 18:38:25 crc kubenswrapper[4985]: I0128 18:38:25.019588 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="probe" containerID="cri-o://c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" gracePeriod=30 Jan 28 18:38:25 crc kubenswrapper[4985]: I0128 18:38:25.050044 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 18:38:25 crc kubenswrapper[4985]: W0128 18:38:25.064333 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d8f391e_0ed3_4969_b61b_5b9d602644fa.slice/crio-19c2c6b8499bb9e5522440a21baf349cc34c095dc0e31b1ed34b87074564860e WatchSource:0}: Error finding container 19c2c6b8499bb9e5522440a21baf349cc34c095dc0e31b1ed34b87074564860e: Status 404 returned error can't find the container with id 19c2c6b8499bb9e5522440a21baf349cc34c095dc0e31b1ed34b87074564860e Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.032192 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d8f391e-0ed3-4969-b61b-5b9d602644fa","Type":"ContainerStarted","Data":"19c2c6b8499bb9e5522440a21baf349cc34c095dc0e31b1ed34b87074564860e"} Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.035211 4985 generic.go:334] "Generic (PLEG): container finished" podID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerID="a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" exitCode=0 Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.035259 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerDied","Data":"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d"} Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.535292 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.592944 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593029 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593073 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593242 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593305 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593429 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.595275 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.601025 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv" (OuterVolumeSpecName: "kube-api-access-m2wgv") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "kube-api-access-m2wgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.605016 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.606571 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts" (OuterVolumeSpecName: "scripts") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.684061 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697341 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697388 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697404 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697415 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697428 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.803454 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data" (OuterVolumeSpecName: "config-data") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.902962 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048727 4985 generic.go:334] "Generic (PLEG): container finished" podID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerID="c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" exitCode=0 Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerDied","Data":"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1"} Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048797 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerDied","Data":"31388f0bf206620f4149df49b7f517c8ef12fb63e7bf921a506b07d05954b8ce"} Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048816 4985 scope.go:117] "RemoveContainer" containerID="c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048949 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.088241 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.152263 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.155188 4985 scope.go:117] "RemoveContainer" containerID="a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.185155 4985 scope.go:117] "RemoveContainer" containerID="c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" Jan 28 18:38:27 crc kubenswrapper[4985]: E0128 18:38:27.186712 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1\": container with ID starting with c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1 not found: ID does not exist" containerID="c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.186756 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1"} err="failed to get container status \"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1\": rpc error: code = NotFound desc = could not find container \"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1\": container with ID starting with c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1 not found: ID does not exist" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.186785 4985 scope.go:117] "RemoveContainer" containerID="a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" Jan 28 18:38:27 crc kubenswrapper[4985]: E0128 18:38:27.187212 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d\": container with ID starting with a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d not found: ID does not exist" containerID="a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.187327 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d"} err="failed to get container status \"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d\": rpc error: code = NotFound desc = could not find container \"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d\": container with ID starting with a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d not found: ID does not exist" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.188758 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:27 crc kubenswrapper[4985]: E0128 18:38:27.189303 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="cinder-scheduler" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.189319 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="cinder-scheduler" Jan 28 18:38:27 crc kubenswrapper[4985]: E0128 18:38:27.189341 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="probe" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.189347 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="probe" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.189608 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="cinder-scheduler" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.189623 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="probe" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.190762 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.192987 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.217131 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.276913 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" path="/var/lib/kubelet/pods/a93c21ad-4841-48c4-95a2-c2876a2fffd1/volumes" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318557 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8kcg\" (UniqueName: \"kubernetes.io/projected/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-kube-api-access-l8kcg\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318673 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318702 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318765 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.420869 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8kcg\" (UniqueName: \"kubernetes.io/projected/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-kube-api-access-l8kcg\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.420927 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.420990 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.421070 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.421108 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.421425 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.421624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.426992 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.427074 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.429521 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.439012 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.444165 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l8kcg\" (UniqueName: \"kubernetes.io/projected/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-kube-api-access-l8kcg\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.525115 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:28 crc kubenswrapper[4985]: I0128 18:38:28.107352 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:28 crc kubenswrapper[4985]: W0128 18:38:28.114165 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07cf4e1d_9eb6_491a_90a5_dc30af589bc0.slice/crio-4649fc8942b17521b7f8f69ff332256f4373b6fcb10a413bb103dc707e5ca7c2 WatchSource:0}: Error finding container 4649fc8942b17521b7f8f69ff332256f4373b6fcb10a413bb103dc707e5ca7c2: Status 404 returned error can't find the container with id 4649fc8942b17521b7f8f69ff332256f4373b6fcb10a413bb103dc707e5ca7c2 Jan 28 18:38:29 crc kubenswrapper[4985]: I0128 18:38:29.103769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerStarted","Data":"ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e"} Jan 28 18:38:29 crc kubenswrapper[4985]: I0128 18:38:29.104227 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerStarted","Data":"4649fc8942b17521b7f8f69ff332256f4373b6fcb10a413bb103dc707e5ca7c2"} Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.119318 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerStarted","Data":"534bfab617653e6a11bf66f4138bb11afac7d0216715a337a1291811d3bf5993"} Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.140035 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.140016274 podStartE2EDuration="3.140016274s" podCreationTimestamp="2026-01-28 18:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:30.136933537 +0000 UTC m=+1520.963496368" watchObservedRunningTime="2026-01-28 18:38:30.140016274 +0000 UTC m=+1520.966579115" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.278177 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5bdcb887dc-rxkm6"] Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.280699 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.283482 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.284286 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.285179 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.301282 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5bdcb887dc-rxkm6"] Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-public-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400863 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-config-data\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400937 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-run-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-etc-swift\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400982 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-combined-ca-bundle\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.401049 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-log-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.401080 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2ms\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-kube-api-access-2c2ms\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " 
pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.401127 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-internal-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.502973 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-public-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503042 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-config-data\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503099 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-run-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503119 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-etc-swift\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503133 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-combined-ca-bundle\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503176 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-log-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503211 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c2ms\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-kube-api-access-2c2ms\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503243 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-internal-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " 
pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503830 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-log-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503888 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-run-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.511909 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-internal-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.518301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-combined-ca-bundle\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.520091 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-config-data\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.523790 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-public-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.533671 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-etc-swift\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.533862 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c2ms\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-kube-api-access-2c2ms\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.603810 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:31 crc kubenswrapper[4985]: I0128 18:38:31.518949 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5bdcb887dc-rxkm6"] Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.088177 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.094628 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:32 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:32 crc kubenswrapper[4985]: > Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.188235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" event={"ID":"12d4e4cf-9153-4a32-9155-f9d13a248a26","Type":"ContainerStarted","Data":"c76d58f590fb1f84e984d71f4424979c392b574109a172ab18e201a96d57db73"} Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.188312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" event={"ID":"12d4e4cf-9153-4a32-9155-f9d13a248a26","Type":"ContainerStarted","Data":"c9ef0b82442a9b3cac449cb5f4cc6374930a4ca3be1767ba0c3ecb60f09c6f17"} Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.214404 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.214835 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-central-agent" containerID="cri-o://9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c" gracePeriod=30 Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.215844 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" containerID="cri-o://eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2" gracePeriod=30 Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.215931 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="sg-core" containerID="cri-o://63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275" gracePeriod=30 Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.215981 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-notification-agent" containerID="cri-o://a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed" gracePeriod=30 Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.336451 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": read tcp 10.217.0.2:46134->10.217.0.202:3000: read: connection reset by peer" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.525838 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 18:38:32 crc 
kubenswrapper[4985]: I0128 18:38:32.979016 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.986291 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.993414 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.993647 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9xd8p" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.993778 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.027444 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.087778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.088126 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.088308 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.088642 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.088364 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.091234 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.117683 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190544 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190575 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190616 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190631 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190688 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190714 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190743 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190768 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.209525 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.235234 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.246005 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.272625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298419 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298629 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298656 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.303086 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.303627 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.304097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.324883 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.326726 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.358194 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.379815 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" event={"ID":"12d4e4cf-9153-4a32-9155-f9d13a248a26","Type":"ContainerStarted","Data":"d5b1e2d40a41ff7b5f57c600340246acd209e59dba0454a65e70ad1ef8c68529"} Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.379857 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.379868 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.408349 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.427832 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.432827 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.434480 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.500180 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502005 4985 generic.go:334] "Generic (PLEG): container finished" podID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerID="eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2" exitCode=0 Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502034 4985 generic.go:334] "Generic (PLEG): container finished" podID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerID="63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275" exitCode=2 Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502041 4985 generic.go:334] "Generic (PLEG): container finished" podID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerID="9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c" exitCode=0 Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502063 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2"} Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502102 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275"} Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502112 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c"} Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.529475 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.529582 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.529794 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.529878 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 
18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.586571 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.641344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.641435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.641595 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.641658 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.670291 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.673989 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.675233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.706303 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.707991 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.723438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.724193 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.744656 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" podStartSLOduration=3.7446259299999998 podStartE2EDuration="3.74462593s" podCreationTimestamp="2026-01-28 18:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:33.42972859 +0000 UTC m=+1524.256291431" watchObservedRunningTime="2026-01-28 18:38:33.74462593 +0000 UTC m=+1524.571188761" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.775828 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.775872 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.775974 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.776062 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.827203 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.878052 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.879006 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.879299 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.879336 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.879569 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.906155 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.911915 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.914435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.915378 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.030139 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.446588 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"] Jan 28 18:38:34 crc kubenswrapper[4985]: W0128 18:38:34.480143 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0db5c7c8_1c53_42d0_8e23_f1cba882d552.slice/crio-2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8 WatchSource:0}: Error finding container 2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8: Status 404 returned error can't find the container with id 2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8 Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.491782 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.541863 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" event={"ID":"0db5c7c8-1c53-42d0-8e23-f1cba882d552","Type":"ContainerStarted","Data":"2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8"} Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.543414 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerStarted","Data":"124e40d06c3bc6dec66768ab9299f6ec41b3437c9591832dd7f81dc8a3da2106"} Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.812447 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": dial tcp 10.217.0.202:3000: connect: connection refused" Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.828723 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.040407 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.571757 4985 generic.go:334] "Generic (PLEG): container finished" podID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerID="c2123433fc9db86b4e9f9ac84736c01949000210bd3cce880a9a4ecb7af8212e" exitCode=0 Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.571970 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerDied","Data":"c2123433fc9db86b4e9f9ac84736c01949000210bd3cce880a9a4ecb7af8212e"} Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.589276 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" event={"ID":"1373681b-8290-4963-897b-b5b27690e19a","Type":"ContainerStarted","Data":"7f8aaec146afdcb274b6be4540ed468073cb056ab2a74bd69ec462b02099487a"} Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.598342 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" event={"ID":"0db5c7c8-1c53-42d0-8e23-f1cba882d552","Type":"ContainerStarted","Data":"18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b"} Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.598539 4985 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.600726 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5965d558dc-cg7wv" event={"ID":"89fc2c75-41eb-441e-a171-5c716b823277","Type":"ContainerStarted","Data":"af15e77d0cac085450dbdbf09aea29f94aab86926bae124219c8abb6e3a9c5c2"} Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.646080 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" podStartSLOduration=3.646056133 podStartE2EDuration="3.646056133s" podCreationTimestamp="2026-01-28 18:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:35.636782731 +0000 UTC m=+1526.463345562" watchObservedRunningTime="2026-01-28 18:38:35.646056133 +0000 UTC m=+1526.472618954" Jan 28 18:38:36 crc kubenswrapper[4985]: I0128 18:38:36.623716 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerStarted","Data":"1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71"} Jan 28 18:38:36 crc kubenswrapper[4985]: I0128 18:38:36.624100 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:36 crc kubenswrapper[4985]: I0128 18:38:36.649124 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" podStartSLOduration=3.64910215 podStartE2EDuration="3.64910215s" podCreationTimestamp="2026-01-28 18:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:36.643540153 +0000 UTC m=+1527.470102994" watchObservedRunningTime="2026-01-28 18:38:36.64910215 +0000 UTC m=+1527.475664981" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.032547 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="841350c5-b9e8-4331-9282-e129f8152153" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.641434 4985 generic.go:334] "Generic (PLEG): container finished" podID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerID="a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed" exitCode=0 Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.641513 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed"} Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.817297 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.856636 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964343 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964416 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964506 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964534 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964643 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964684 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964711 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964750 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964901 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.965469 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.965489 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.977846 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts" (OuterVolumeSpecName: "scripts") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.977859 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp" (OuterVolumeSpecName: "kube-api-access-94qqp") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "kube-api-access-94qqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.015444 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.068154 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.068194 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.068205 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.103407 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data" (OuterVolumeSpecName: "config-data") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.129958 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.175898 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.176136 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.662795 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"24f37b343823af87929d4be979bf978ca07c8b7fe426ee346d1a058ab94e67be"} Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.662883 4985 scope.go:117] "RemoveContainer" containerID="eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.662949 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.733603 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.758886 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790019 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:38 crc kubenswrapper[4985]: E0128 18:38:38.790610 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-notification-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790625 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-notification-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: E0128 18:38:38.790648 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="sg-core" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790654 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="sg-core" Jan 28 18:38:38 crc kubenswrapper[4985]: E0128 18:38:38.790675 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790681 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" Jan 28 18:38:38 crc kubenswrapper[4985]: E0128 18:38:38.790694 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-central-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790700 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-central-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790913 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="sg-core" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790929 4985 
memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-central-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790948 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790963 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-notification-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.796740 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.801618 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.801744 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.824128 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.923112 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.923167 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.923191 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.923233 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.924146 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.924327 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.924379 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027474 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027578 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027652 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027699 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027772 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.028686 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.028724 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.035358 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.035423 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.039378 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.040281 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.134311 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.177794 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.288667 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" path="/var/lib/kubelet/pods/15ab3d09-80d2-4a3b-84d8-09119b2be701/volumes" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.421654 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.158052 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.160033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.182909 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.207309 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.208935 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.232019 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.265328 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.266823 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.290309 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.356907 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.356984 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357003 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357053 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357071 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357145 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357180 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357237 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: 
I0128 18:38:40.460005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460092 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460192 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460288 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460314 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460344 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460363 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460425 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460460 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc 
kubenswrapper[4985]: I0128 18:38:40.460495 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460577 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.468310 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.474873 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.478018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.485076 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.485921 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.489449 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.489668 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.491091 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.532589 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.565008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.565066 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.565105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.565165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.569066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.572132 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.575104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 
28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.593118 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.615679 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.622072 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.783792 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.889888 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:41 crc kubenswrapper[4985]: I0128 18:38:41.185720 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:38:41 crc kubenswrapper[4985]: I0128 18:38:41.185775 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.111642 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:42 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:42 crc kubenswrapper[4985]: > Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.928190 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.941444 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.981323 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.985079 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.007237 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.009725 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.010115 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.010310 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.013294 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.013544 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.026197 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.041629 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.127648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.127693 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.127868 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.127914 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128214 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128452 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: 
I0128 18:38:43.128553 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128689 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128757 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128996 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.129079 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.231935 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232043 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232064 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " 
pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232135 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232195 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232264 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232290 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232339 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232356 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232406 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 
18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.253779 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.254034 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.254176 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.254411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.255048 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.255570 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.257419 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.257951 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.258360 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.259556 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.266618 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.274097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.383973 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.384560 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.430589 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.603858 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.604156 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns" containerID="cri-o://911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106" gracePeriod=10 Jan 28 18:38:44 crc kubenswrapper[4985]: I0128 18:38:44.265749 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:44 crc kubenswrapper[4985]: I0128 18:38:44.348118 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.773138 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerID="911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106" exitCode=0 Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.773188 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerDied","Data":"911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106"} Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.778611 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.841703 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"] Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.841938 4985 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/neutron-d8b8b566d-89qjp" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-api" containerID="cri-o://a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7" gracePeriod=30 Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.842439 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d8b8b566d-89qjp" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-httpd" containerID="cri-o://f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb" gracePeriod=30 Jan 28 18:38:46 crc kubenswrapper[4985]: I0128 18:38:46.788719 4985 generic.go:334] "Generic (PLEG): container finished" podID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerID="f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb" exitCode=0 Jan 28 18:38:46 crc kubenswrapper[4985]: I0128 18:38:46.789052 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerDied","Data":"f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb"} Jan 28 18:38:47 crc kubenswrapper[4985]: E0128 18:38:47.287935 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 28 18:38:47 crc kubenswrapper[4985]: E0128 18:38:47.288929 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfh5b6h56dh588h4hd5h549h566hbdh68fh56h5dbh5f8h5ch5dch5f8h55dh679h67dh79h678hbh5cch5b8h544h577h576hcfhb8h696h5bbh54q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57stt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(1d8f391e-0ed3-4969-b61b-5b9d602644fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:38:47 crc kubenswrapper[4985]: E0128 18:38:47.290021 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="1d8f391e-0ed3-4969-b61b-5b9d602644fa" Jan 28 18:38:47 crc kubenswrapper[4985]: I0128 18:38:47.362675 4985 scope.go:117] "RemoveContainer" containerID="63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275" Jan 28 18:38:47 crc kubenswrapper[4985]: I0128 18:38:47.773438 4985 scope.go:117] "RemoveContainer" containerID="a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed" Jan 28 18:38:47 crc kubenswrapper[4985]: I0128 18:38:47.890882 4985 scope.go:117] "RemoveContainer" containerID="9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c" Jan 28 18:38:47 crc kubenswrapper[4985]: E0128 18:38:47.890973 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="1d8f391e-0ed3-4969-b61b-5b9d602644fa" Jan 28 18:38:48 crc kubenswrapper[4985]: E0128 18:38:48.138240 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89fc2c75_41eb_441e_a171_5c716b823277.slice/crio-conmon-06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.278728 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364076 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364246 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364418 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364484 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364521 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364668 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.429692 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q" (OuterVolumeSpecName: "kube-api-access-nqm7q") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "kube-api-access-nqm7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.468261 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.755729 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.778437 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.781437 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config" (OuterVolumeSpecName: "config") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.782126 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.791305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.798170 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.851821 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.880320 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.880347 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.880356 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.880364 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.881180 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.881171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerDied","Data":"2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc"} Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.881236 4985 scope.go:117] "RemoveContainer" containerID="911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.921179 4985 scope.go:117] "RemoveContainer" containerID="c3d6846527cefd541216dec8dce99f14831f1db9f838810b3978ccef4ebab806" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.935408 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.954131 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.985319 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:38:49 crc kubenswrapper[4985]: W0128 18:38:49.006432 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod261340dd_15fd_43d9_8db3_3de095d8728a.slice/crio-21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a WatchSource:0}: Error finding container 21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a: Status 404 returned error can't find the container with id 21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.020302 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:38:49 crc kubenswrapper[4985]: W0128 18:38:49.031574 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0c2a92a_343c_42fa_a740_8bb10701d271.slice/crio-949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd WatchSource:0}: Error finding container 949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd: Status 404 returned error can't find the container with id 949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.071399 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.092299 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.106259 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.283450 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" path="/var/lib/kubelet/pods/c3a8f8a9-e888-4754-94da-0ef0e972c995/volumes" Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.905068 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerStarted","Data":"b124dd8e680ed4c6b21bcff9be1e93e485ca3c7ce4f5a633c143c727e10e2e74"} Jan 28 18:38:49 crc 
kubenswrapper[4985]: I0128 18:38:49.907221 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5965d558dc-cg7wv" event={"ID":"89fc2c75-41eb-441e-a171-5c716b823277","Type":"ContainerStarted","Data":"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.907277 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5965d558dc-cg7wv" podUID="89fc2c75-41eb-441e-a171-5c716b823277" containerName="heat-api" containerID="cri-o://06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00" gracePeriod=60 Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.907296 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.909940 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" event={"ID":"1373681b-8290-4963-897b-b5b27690e19a","Type":"ContainerStarted","Data":"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.910049 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" podUID="1373681b-8290-4963-897b-b5b27690e19a" containerName="heat-cfnapi" containerID="cri-o://0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a" gracePeriod=60 Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.910132 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.917234 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" event={"ID":"f0c2a92a-343c-42fa-a740-8bb10701d271","Type":"ContainerStarted","Data":"949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.920350 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f74b8b49-ngj6j" event={"ID":"261340dd-15fd-43d9-8db3-3de095d8728a","Type":"ContainerStarted","Data":"21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.924549 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"831d830f0ce8de8c61fae9ceebb6944114447b863f9b44abf86e65cce9b70907"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.933655 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5965d558dc-cg7wv" podStartSLOduration=4.64858077 podStartE2EDuration="16.933630024s" podCreationTimestamp="2026-01-28 18:38:33 +0000 UTC" firstStartedPulling="2026-01-28 18:38:35.09941845 +0000 UTC m=+1525.925981271" lastFinishedPulling="2026-01-28 18:38:47.384467704 +0000 UTC m=+1538.211030525" observedRunningTime="2026-01-28 18:38:49.928054116 +0000 UTC m=+1540.754616947" watchObservedRunningTime="2026-01-28 18:38:49.933630024 +0000 UTC m=+1540.760192845" Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.937654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54bf646c6-b6zb2" event={"ID":"a907310b-926c-4b8e-b3db-b8a43844891c","Type":"ContainerStarted","Data":"c2cd5ecab7f62d49a442677c7f74b95e91134604fb9c330ec7bb5b250544e223"} Jan 28 18:38:49 crc 
kubenswrapper[4985]: I0128 18:38:49.941125 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerStarted","Data":"81214ec8d253d3da7a8b05fb6b49e40b2d03873d9fbc8130d3d5a18dff66c068"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.986141 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" podStartSLOduration=4.370591951 podStartE2EDuration="16.986118116s" podCreationTimestamp="2026-01-28 18:38:33 +0000 UTC" firstStartedPulling="2026-01-28 18:38:34.870458636 +0000 UTC m=+1525.697021457" lastFinishedPulling="2026-01-28 18:38:47.485984801 +0000 UTC m=+1538.312547622" observedRunningTime="2026-01-28 18:38:49.948771831 +0000 UTC m=+1540.775334652" watchObservedRunningTime="2026-01-28 18:38:49.986118116 +0000 UTC m=+1540.812680937" Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.983750 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f74b8b49-ngj6j" event={"ID":"261340dd-15fd-43d9-8db3-3de095d8728a","Type":"ContainerStarted","Data":"df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.984112 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.986053 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerStarted","Data":"6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.988092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerStarted","Data":"33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.989962 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" event={"ID":"f0c2a92a-343c-42fa-a740-8bb10701d271","Type":"ContainerStarted","Data":"ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.990461 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.992131 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54bf646c6-b6zb2" event={"ID":"a907310b-926c-4b8e-b3db-b8a43844891c","Type":"ContainerStarted","Data":"c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.992312 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:51 crc kubenswrapper[4985]: I0128 18:38:51.035490 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-54bf646c6-b6zb2" podStartSLOduration=11.035471161 podStartE2EDuration="11.035471161s" podCreationTimestamp="2026-01-28 18:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:51.022088723 +0000 UTC m=+1541.848651564" watchObservedRunningTime="2026-01-28 
18:38:51.035471161 +0000 UTC m=+1541.862033982" Jan 28 18:38:51 crc kubenswrapper[4985]: I0128 18:38:51.046615 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78f74b8b49-ngj6j" podStartSLOduration=9.046595895 podStartE2EDuration="9.046595895s" podCreationTimestamp="2026-01-28 18:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:51.003994312 +0000 UTC m=+1541.830557133" watchObservedRunningTime="2026-01-28 18:38:51.046595895 +0000 UTC m=+1541.873158716" Jan 28 18:38:51 crc kubenswrapper[4985]: I0128 18:38:51.070077 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podStartSLOduration=9.070057367 podStartE2EDuration="9.070057367s" podCreationTimestamp="2026-01-28 18:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:51.038746493 +0000 UTC m=+1541.865309334" watchObservedRunningTime="2026-01-28 18:38:51.070057367 +0000 UTC m=+1541.896620188" Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.017190 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.018159 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.101423 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:52 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:52 crc kubenswrapper[4985]: > Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.109224 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5c6549b6bc-9j9qm" podStartSLOduration=12.109199605 podStartE2EDuration="12.109199605s" podCreationTimestamp="2026-01-28 18:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:52.09060901 +0000 UTC m=+1542.917171831" watchObservedRunningTime="2026-01-28 18:38:52.109199605 +0000 UTC m=+1542.935762426" Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.129242 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" podStartSLOduration=12.12922272 podStartE2EDuration="12.12922272s" podCreationTimestamp="2026-01-28 18:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:52.126776171 +0000 UTC m=+1542.953339002" watchObservedRunningTime="2026-01-28 18:38:52.12922272 +0000 UTC m=+1542.955785541" Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.032946 4985 generic.go:334] "Generic (PLEG): container finished" podID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerID="6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c" exitCode=1 Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.033015 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" 
event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerDied","Data":"6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c"} Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.033849 4985 scope.go:117] "RemoveContainer" containerID="6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c" Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.037441 4985 generic.go:334] "Generic (PLEG): container finished" podID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerID="33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1" exitCode=1 Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.037500 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerDied","Data":"33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1"} Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.037871 4985 scope.go:117] "RemoveContainer" containerID="33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1" Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.287166 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.287752 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log" containerID="cri-o://c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f" gracePeriod=30 Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.288031 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd" containerID="cri-o://c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016" gracePeriod=30 Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.452027 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.052030 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerStarted","Data":"7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2"} Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.052176 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.054190 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerStarted","Data":"abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915"} Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.054420 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.057131 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c2c9b96-2033-4221-8667-e24507c76269" containerID="c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f" exitCode=143 Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.057163 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerDied","Data":"c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f"} Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.876668 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.877762 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-log" containerID="cri-o://824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c" gracePeriod=30 Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.877879 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd" containerID="cri-o://1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951" gracePeriod=30 Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.070010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"6264c75e309967c9f20db46eab077cb1a5ee5f417ccd8f79e08cda266fd4cda5"} Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.071973 4985 generic.go:334] "Generic (PLEG): container finished" podID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" exitCode=1 Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.072019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerDied","Data":"abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915"} Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.072098 4985 scope.go:117] "RemoveContainer" containerID="6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.072821 4985 scope.go:117] "RemoveContainer" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" Jan 28 18:38:55 crc kubenswrapper[4985]: E0128 18:38:55.073076 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f4c49c5-d7wbz_openstack(c96952df-fe61-4b70-a166-ebf0dc93bb94)\"" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.077280 4985 generic.go:334] "Generic (PLEG): container finished" podID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" exitCode=1 Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.077405 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerDied","Data":"7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2"} Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.077766 4985 scope.go:117] "RemoveContainer" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" Jan 28 18:38:55 crc kubenswrapper[4985]: E0128 18:38:55.078016 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5c6549b6bc-9j9qm_openstack(c2d3f9ad-30d3-4e69-9229-f84c7b43b341)\"" pod="openstack/heat-api-5c6549b6bc-9j9qm" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.085805 4985 generic.go:334] "Generic (PLEG): container finished" podID="183853eb-591f-4859-9824-550b76c6f115" containerID="824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c" exitCode=143 Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.085856 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerDied","Data":"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c"} Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.160624 4985 scope.go:117] "RemoveContainer" containerID="33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.533687 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.891905 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.101719 4985 scope.go:117] "RemoveContainer" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" Jan 28 18:38:56 crc kubenswrapper[4985]: E0128 18:38:56.102371 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f4c49c5-d7wbz_openstack(c96952df-fe61-4b70-a166-ebf0dc93bb94)\"" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.106860 4985 scope.go:117] "RemoveContainer" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" Jan 28 18:38:56 crc kubenswrapper[4985]: E0128 18:38:56.107081 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5c6549b6bc-9j9qm_openstack(c2d3f9ad-30d3-4e69-9229-f84c7b43b341)\"" pod="openstack/heat-api-5c6549b6bc-9j9qm" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.114684 4985 generic.go:334] "Generic (PLEG): container finished" podID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerID="a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7" exitCode=0 Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.114761 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerDied","Data":"a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7"} Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.122893 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"ebfc9ea99db013235f5adee2c18ba99af05a9f8dc3abaf0616d7d804e0cb54cc"} Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.132801 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.558943 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.192:9292/healthcheck\": read tcp 10.217.0.2:50556->10.217.0.192:9292: read: connection reset by peer" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.559845 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.192:9292/healthcheck\": read tcp 10.217.0.2:50566->10.217.0.192:9292: read: connection reset by peer" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.746401 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.811325 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.811809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.812389 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.812415 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.812501 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.816481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m" (OuterVolumeSpecName: "kube-api-access-x2q6m") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "kube-api-access-x2q6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.864261 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.918204 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.918238 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.024353 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config" (OuterVolumeSpecName: "config") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.049232 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.122925 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.122960 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.148649 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c2c9b96-2033-4221-8667-e24507c76269" containerID="c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016" exitCode=0 Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.148708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerDied","Data":"c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016"} Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.167520 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerDied","Data":"c9f68ac609dd2f41623830c63a61e02d6c06dc430a7f02a9f5349b8bf758436d"} Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.167580 4985 scope.go:117] "RemoveContainer" containerID="f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.167727 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.175098 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.196353 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"a38360ca0387e0827a57f03126984e0a20e5b118f82925b6ad3b02f72f4d6f3b"} Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.197027 4985 scope.go:117] "RemoveContainer" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.197187 4985 scope.go:117] "RemoveContainer" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" Jan 28 18:38:57 crc kubenswrapper[4985]: E0128 18:38:57.197425 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5c6549b6bc-9j9qm_openstack(c2d3f9ad-30d3-4e69-9229-f84c7b43b341)\"" pod="openstack/heat-api-5c6549b6bc-9j9qm" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" Jan 28 18:38:57 crc kubenswrapper[4985]: E0128 18:38:57.197432 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f4c49c5-d7wbz_openstack(c96952df-fe61-4b70-a166-ebf0dc93bb94)\"" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.216661 4985 scope.go:117] "RemoveContainer" containerID="a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.225504 4985 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.267627 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331553 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331631 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331681 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331713 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331929 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331980 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.332076 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.333969 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.334538 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs" (OuterVolumeSpecName: "logs") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.342049 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts" (OuterVolumeSpecName: "scripts") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.342319 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7" (OuterVolumeSpecName: "kube-api-access-nh6l7") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "kube-api-access-nh6l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.371729 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.375929 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f" (OuterVolumeSpecName: "glance") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "pvc-a28b8b70-fd49-47a9-9731-34913060b77f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.405230 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data" (OuterVolumeSpecName: "config-data") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.421396 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435221 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435287 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") on node \"crc\" " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435301 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435311 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435320 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435328 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435338 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435346 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.462087 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.462240 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a28b8b70-fd49-47a9-9731-34913060b77f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f") on node "crc"
Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.537102 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.609131 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"]
Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.620157 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"]
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.238338 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerDied","Data":"43d735c182cbb81ec5017199eb78a2029759022896fdabfe1470a42d01bd6b7b"}
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.238643 4985 scope.go:117] "RemoveContainer" containerID="c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.238649 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.311146 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.325992 4985 scope.go:117] "RemoveContainer" containerID="c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.326367 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357045 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357513 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357527 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357546 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-api"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357553 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-api"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357566 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="init"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357573 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="init"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357587 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357593 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357612 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357618 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357634 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357639 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357847 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-api"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357870 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357881 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357893 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357901 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.359054 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.362163 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.362995 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.381206 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.395526 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.193:9292/healthcheck\": read tcp 10.217.0.2:40330->10.217.0.193:9292: read: connection reset by peer" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.395737 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.193:9292/healthcheck\": read tcp 10.217.0.2:40336->10.217.0.193:9292: read: connection reset by peer" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454312 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454388 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454433 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454683 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454897 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-logs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.455098 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.455172 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.455321 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbxd\" (UniqueName: \"kubernetes.io/projected/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-kube-api-access-txbxd\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.557928 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558027 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558080 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-logs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558271 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558308 
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558343 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txbxd\" (UniqueName: \"kubernetes.io/projected/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-kube-api-access-txbxd\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.559371 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-logs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.559660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.565885 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.566885 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.568454 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.568495 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d6568af50c46d048a9023d9ac84db4baa0cf8b023fb9ef6c59e622b024bcc77/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.575695 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.577178 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.589445 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txbxd\" (UniqueName: \"kubernetes.io/projected/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-kube-api-access-txbxd\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.677635 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.979625 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.179551 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.371922 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2c9b96-2033-4221-8667-e24507c76269" path="/var/lib/kubelet/pods/8c2c9b96-2033-4221-8667-e24507c76269/volumes" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.376469 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" path="/var/lib/kubelet/pods/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25/volumes" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392540 4985 generic.go:334] "Generic (PLEG): container finished" podID="183853eb-591f-4859-9824-550b76c6f115" containerID="1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951" exitCode=0 Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerDied","Data":"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951"} Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392676 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerDied","Data":"3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f"} Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392700 4985 scope.go:117] "RemoveContainer" containerID="1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392704 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.456322 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.456378 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.456414 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457039 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457104 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: 
\"183853eb-591f-4859-9824-550b76c6f115\") " Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457168 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457202 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457306 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.466037 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.466853 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs" (OuterVolumeSpecName: "logs") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.472356 4985 scope.go:117] "RemoveContainer" containerID="824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.500135 4985 scope.go:117] "RemoveContainer" containerID="1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951" Jan 28 18:38:59 crc kubenswrapper[4985]: E0128 18:38:59.500633 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951\": container with ID starting with 1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951 not found: ID does not exist" containerID="1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.500681 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951"} err="failed to get container status \"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951\": rpc error: code = NotFound desc = could not find container \"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951\": container with ID starting with 1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951 not found: ID does not exist" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.500705 4985 scope.go:117] "RemoveContainer" containerID="824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c" Jan 28 18:38:59 crc kubenswrapper[4985]: E0128 18:38:59.500940 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c\": container with ID starting with 824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c not found: ID does not exist" containerID="824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.500962 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c"} err="failed to get container status \"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c\": rpc error: code = NotFound desc = could not find container \"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c\": container with ID starting with 824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c not found: ID does not exist" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.515542 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts" (OuterVolumeSpecName: "scripts") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.524634 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx" (OuterVolumeSpecName: "kube-api-access-vsqtx") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "kube-api-access-vsqtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.564426 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.564465 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.564476 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.564486 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.783593 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc" (OuterVolumeSpecName: "glance") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "pvc-515c3b80-2464-4146-928c-cf9de6a379dc". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.832771 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.887917 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") on node \"crc\" " Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.887952 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.922247 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data" (OuterVolumeSpecName: "config-data") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.928281 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.990868 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.005471 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.032962 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.033418 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-515c3b80-2464-4146-928c-cf9de6a379dc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc") on node "crc" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.095115 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.095162 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.341316 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.359551 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375100 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:39:00 crc kubenswrapper[4985]: E0128 18:39:00.375603 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-log" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375615 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-log" Jan 28 18:39:00 crc kubenswrapper[4985]: E0128 18:39:00.375634 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375640 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375892 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375926 4985 
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.377106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.383938 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.384127 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.411178 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422240 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"2588192f60378ca1092182e85a2d142272639f43f1993cca86706ccb45ce9080"}
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422423 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-central-agent" containerID="cri-o://6264c75e309967c9f20db46eab077cb1a5ee5f417ccd8f79e08cda266fd4cda5" gracePeriod=30
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422499 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="proxy-httpd" containerID="cri-o://2588192f60378ca1092182e85a2d142272639f43f1993cca86706ccb45ce9080" gracePeriod=30
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422523 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422533 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="sg-core" containerID="cri-o://a38360ca0387e0827a57f03126984e0a20e5b118f82925b6ad3b02f72f4d6f3b" gracePeriod=30
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422543 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-notification-agent" containerID="cri-o://ebfc9ea99db013235f5adee2c18ba99af05a9f8dc3abaf0616d7d804e0cb54cc" gracePeriod=30
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.439573 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7","Type":"ContainerStarted","Data":"1624e18dccc8a03d5689dd5379b5128a85d73c1b1de90d097d616bfae8ab0542"}
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.484689 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=12.619858807 podStartE2EDuration="22.484661502s" podCreationTimestamp="2026-01-28 18:38:38 +0000 UTC" firstStartedPulling="2026-01-28 18:38:49.024509297 +0000 UTC m=+1539.851072118" lastFinishedPulling="2026-01-28 18:38:58.889311992 +0000 UTC m=+1549.715874813" observedRunningTime="2026-01-28 18:39:00.456986731 +0000 UTC m=+1551.283549552" watchObservedRunningTime="2026-01-28 18:39:00.484661502 +0000 UTC m=+1551.311224323"
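[editor's note] The four "Killing container with a grace period" entries for ceilometer-0 carry gracePeriod=30: SIGTERM first, SIGKILL only if the process is still alive when the grace period lapses. A self-contained illustration of that contract (not kubelet code; "sleep 300" stands in for a container process, and a 3s grace period is used so the demo finishes quickly):

package main

import (
    "fmt"
    "os/exec"
    "syscall"
    "time"
)

// Send SIGTERM, wait up to the grace period, then escalate to SIGKILL.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()

    cmd.Process.Signal(syscall.SIGTERM)
    select {
    case <-done:
        fmt.Println("exited within grace period")
    case <-time.After(grace):
        fmt.Println("grace period elapsed, escalating to SIGKILL")
        cmd.Process.Kill()
        <-done
    }
}

func main() {
    cmd := exec.Command("sleep", "300")
    if err := cmd.Start(); err != nil {
        panic(err)
    }
    killWithGrace(cmd, 3*time.Second) // the kubelet used 30s for ceilometer-0
}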
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.504704 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505136 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505219 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbsj\" (UniqueName: \"kubernetes.io/projected/d7b0993c-0b43-44d7-8498-6808f2a1439e-kube-api-access-cdbsj\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505268 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-logs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505414 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505596 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505644 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505721 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.607925 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0"
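[editor's note] The pod_startup_latency_tracker entry for ceilometer-0 above is internally consistent: podStartSLOduration is the end-to-end startup duration minus the time spent pulling images, i.e. 22.484661502s - (lastFinishedPulling - firstStartedPulling) = 22.484661502s - 9.864802695s = 12.619858807s. A quick check using the timestamps exactly as they appear in the log:

package main

import (
    "fmt"
    "time"
)

// Recomputes the ceilometer-0 startup numbers from the tracker entry above.
func main() {
    parse := func(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }
    created := parse("2026-01-28 18:38:38 +0000 UTC")
    firstPull := parse("2026-01-28 18:38:49.024509297 +0000 UTC")
    lastPull := parse("2026-01-28 18:38:58.889311992 +0000 UTC")
    running := parse("2026-01-28 18:39:00.484661502 +0000 UTC") // watchObservedRunningTime

    e2e := running.Sub(created)
    slo := e2e - lastPull.Sub(firstPull)
    fmt.Println("podStartE2EDuration:", e2e) // 22.484661502s
    fmt.Println("podStartSLOduration:", slo) // 12.619858807s
}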
\"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608025 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdbsj\" (UniqueName: \"kubernetes.io/projected/d7b0993c-0b43-44d7-8498-6808f2a1439e-kube-api-access-cdbsj\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-logs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608198 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608275 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.610356 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.610476 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.610968 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-logs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.611203 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.618164 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.619039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.622686 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.632178 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.681114 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdbsj\" (UniqueName: \"kubernetes.io/projected/d7b0993c-0b43-44d7-8498-6808f2a1439e-kube-api-access-cdbsj\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.691772 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.692067 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d04256428a5045d3b55ec61489edb632decdf9f4666f3e6952b725d307784bb2/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.873479 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.907002 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.988886 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.989320 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine" containerID="cri-o://18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" gracePeriod=60 Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.997887 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.346372 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183853eb-591f-4859-9824-550b76c6f115" path="/var/lib/kubelet/pods/183853eb-591f-4859-9824-550b76c6f115/volumes" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.508486 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d8f391e-0ed3-4969-b61b-5b9d602644fa","Type":"ContainerStarted","Data":"1661f6106a354eeb8001c50cfed327742713be4cb739c514d329c311714e9193"} Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.540488 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerID="2588192f60378ca1092182e85a2d142272639f43f1993cca86706ccb45ce9080" exitCode=0 Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.540553 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerID="a38360ca0387e0827a57f03126984e0a20e5b118f82925b6ad3b02f72f4d6f3b" exitCode=2 Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.540590 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"2588192f60378ca1092182e85a2d142272639f43f1993cca86706ccb45ce9080"} Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.540630 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"a38360ca0387e0827a57f03126984e0a20e5b118f82925b6ad3b02f72f4d6f3b"} Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.545598 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.737584945 podStartE2EDuration="37.545575474s" podCreationTimestamp="2026-01-28 18:38:24 +0000 UTC" firstStartedPulling="2026-01-28 18:38:25.066631831 +0000 UTC m=+1515.893194652" lastFinishedPulling="2026-01-28 18:38:59.87462236 +0000 UTC m=+1550.701185181" observedRunningTime="2026-01-28 18:39:01.5298318 +0000 UTC m=+1552.356394621" watchObservedRunningTime="2026-01-28 18:39:01.545575474 +0000 UTC m=+1552.372138295" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.641933 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.751366 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.869507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.882300 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.142440 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:39:02 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:39:02 crc kubenswrapper[4985]: > Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.206477 4985 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.284837 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.354918 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.442622 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") pod \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.442742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") pod \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.442895 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") pod \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.443048 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") pod \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.461753 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg" (OuterVolumeSpecName: "kube-api-access-p66bg") pod "c2d3f9ad-30d3-4e69-9229-f84c7b43b341" (UID: "c2d3f9ad-30d3-4e69-9229-f84c7b43b341"). InnerVolumeSpecName "kube-api-access-p66bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.461922 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c2d3f9ad-30d3-4e69-9229-f84c7b43b341" (UID: "c2d3f9ad-30d3-4e69-9229-f84c7b43b341"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.500684 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2d3f9ad-30d3-4e69-9229-f84c7b43b341" (UID: "c2d3f9ad-30d3-4e69-9229-f84c7b43b341"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.542054 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data" (OuterVolumeSpecName: "config-data") pod "c2d3f9ad-30d3-4e69-9229-f84c7b43b341" (UID: "c2d3f9ad-30d3-4e69-9229-f84c7b43b341"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.553001 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.553346 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.553358 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.553371 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.582822 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.583453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerDied","Data":"b124dd8e680ed4c6b21bcff9be1e93e485ca3c7ce4f5a633c143c727e10e2e74"} Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.583525 4985 scope.go:117] "RemoveContainer" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.619384 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerID="ebfc9ea99db013235f5adee2c18ba99af05a9f8dc3abaf0616d7d804e0cb54cc" exitCode=0 Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.619483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"ebfc9ea99db013235f5adee2c18ba99af05a9f8dc3abaf0616d7d804e0cb54cc"} Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.638227 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7","Type":"ContainerStarted","Data":"b81cdd66bb8c798116c98e56da7c17cc64e9b25f2282b923ca2a69fdf3290ba0"} Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.660626 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d7b0993c-0b43-44d7-8498-6808f2a1439e","Type":"ContainerStarted","Data":"d33d17fd0dd647981ed09e99c772fb63ca0e1d2f6c1edf08c85f3bb830b8d000"} Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.801075 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.868618 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.063928 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.177709 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") pod \"c96952df-fe61-4b70-a166-ebf0dc93bb94\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.177896 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") pod \"c96952df-fe61-4b70-a166-ebf0dc93bb94\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.178060 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") pod \"c96952df-fe61-4b70-a166-ebf0dc93bb94\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.178166 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") pod \"c96952df-fe61-4b70-a166-ebf0dc93bb94\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.184852 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c96952df-fe61-4b70-a166-ebf0dc93bb94" (UID: "c96952df-fe61-4b70-a166-ebf0dc93bb94"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.189063 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq" (OuterVolumeSpecName: "kube-api-access-kscsq") pod "c96952df-fe61-4b70-a166-ebf0dc93bb94" (UID: "c96952df-fe61-4b70-a166-ebf0dc93bb94"). InnerVolumeSpecName "kube-api-access-kscsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.255800 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c96952df-fe61-4b70-a166-ebf0dc93bb94" (UID: "c96952df-fe61-4b70-a166-ebf0dc93bb94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.280073 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data" (OuterVolumeSpecName: "config-data") pod "c96952df-fe61-4b70-a166-ebf0dc93bb94" (UID: "c96952df-fe61-4b70-a166-ebf0dc93bb94"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.282118 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.282152 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.282167 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.282182 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.287882 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" path="/var/lib/kubelet/pods/c2d3f9ad-30d3-4e69-9229-f84c7b43b341/volumes" Jan 28 18:39:03 crc kubenswrapper[4985]: E0128 18:39:03.335753 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:39:03 crc kubenswrapper[4985]: E0128 18:39:03.343538 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:39:03 crc kubenswrapper[4985]: E0128 18:39:03.346572 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:39:03 crc kubenswrapper[4985]: E0128 18:39:03.346651 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.837502 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7","Type":"ContainerStarted","Data":"bcbb77df20289a96e57c3bdab8e83977f2e8aed07c87f906ad623466ac2e0388"} Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.876492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d7b0993c-0b43-44d7-8498-6808f2a1439e","Type":"ContainerStarted","Data":"0d8e891cef15be2548a1fc103989cfe6a80da804e12c0a1f0bb4394f9d942622"} Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.906184 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.906157489 podStartE2EDuration="5.906157489s" podCreationTimestamp="2026-01-28 18:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:03.871443299 +0000 UTC m=+1554.698006130" watchObservedRunningTime="2026-01-28 18:39:03.906157489 +0000 UTC m=+1554.732720310" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.914046 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerDied","Data":"81214ec8d253d3da7a8b05fb6b49e40b2d03873d9fbc8130d3d5a18dff66c068"} Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.914106 4985 scope.go:117] "RemoveContainer" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.914301 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.977317 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.993974 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:39:04 crc kubenswrapper[4985]: I0128 18:39:04.926559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d7b0993c-0b43-44d7-8498-6808f2a1439e","Type":"ContainerStarted","Data":"4668d03328d8733b473a0bc4e38e872cd4c65187b388112bf05d3b58cdf0c96b"} Jan 28 18:39:04 crc kubenswrapper[4985]: I0128 18:39:04.955998 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.955980438 podStartE2EDuration="4.955980438s" podCreationTimestamp="2026-01-28 18:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:04.950161214 +0000 UTC m=+1555.776724035" watchObservedRunningTime="2026-01-28 18:39:04.955980438 +0000 UTC m=+1555.782543259" Jan 28 18:39:05 crc kubenswrapper[4985]: I0128 18:39:05.280441 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" path="/var/lib/kubelet/pods/c96952df-fe61-4b70-a166-ebf0dc93bb94/volumes" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.258728 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:39:06 crc kubenswrapper[4985]: E0128 18:39:06.259574 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259592 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: E0128 18:39:06.259619 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259627 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: E0128 18:39:06.259651 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259659 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259927 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259942 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259965 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259983 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.260997 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.282037 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.356068 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:39:06 crc kubenswrapper[4985]: E0128 18:39:06.363974 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.364005 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.366860 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.375324 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.375534 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.399723 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.399820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.400625 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.400817 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.462834 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.465205 4985 util.go:30] "No sandbox for pod can be found. 
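
'Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"' reflects the kubelet's per-object watches: it does not list every Secret in the namespace, it watches exactly the objects its pods reference, by name. The client-go pattern that behaves this way looks roughly like the following (clientset wiring omitted; a sketch of the pattern, not the kubelet's own code):

```go
package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// Watch a single named Secret instead of the whole collection: a field
// selector on metadata.name narrows the reflector's LIST/WATCH to one object.
func watchOneSecret(cs kubernetes.Interface) informers.SharedInformerFactory {
	return informers.NewSharedInformerFactoryWithOptions(
		cs, 10*time.Minute,
		informers.WithNamespace("openstack"),
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.FieldSelector = "metadata.name=nova-api-db-secret"
		}),
	)
}

func main() { _ = watchOneSecret(nil) } // placeholder; real code passes a clientset and starts the factory
```
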
Need to start a new one" pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.502745 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.502904 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.503005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.503051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.503108 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.503149 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.504060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.505063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.508367 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.532096 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.533773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.563086 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.565132 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.581224 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.583156 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.584105 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.600728 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608441 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608944 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.636519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.640369 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.655977 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.711207 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.711303 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.711448 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.711533 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.717167 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.718779 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.745674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.769045 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.770508 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.772469 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.787020 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.788131 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.814096 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.814224 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.814442 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.814478 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 
18:39:06.819702 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.834997 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.861729 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.874944 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.917869 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.918380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.918877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.954201 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.204513 4985 util.go:30] "No sandbox for pod can be found. 
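
The kube-api-access-* volumes mounted above (22ppl, ghxll, fbv49, xrdwh, q4f6c, dclxs) are the standard projected service-account volume that admission injects into every pod: a bound token, the cluster CA bundle, and the namespace via the downward API. An illustrative reconstruction with the real corev1 types; the concrete values mirror the usual defaults, not anything read from this log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// The three standard sources of a kube-api-access-* projected volume.
func kubeAPIAccess() corev1.Volume {
	expiry := int64(3607)
	return corev1.Volume{
		Name: "kube-api-access-q4f6c",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry, Path: "token"}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}}}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}}}},
				},
			},
		},
	}
}

func main() { fmt.Println(kubeAPIAccess().Name) }
```
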
Need to start a new one" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.231663 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:39:07 crc kubenswrapper[4985]: W0128 18:39:07.498540 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc08dbb5_2423_4fe9_8c21_a668459cad74.slice/crio-2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6 WatchSource:0}: Error finding container 2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6: Status 404 returned error can't find the container with id 2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6 Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.522588 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.827332 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.902344 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.934303 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.017989 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.018770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" event={"ID":"b4efe2ca-1bc9-40db-944e-fb86222e4f98","Type":"ContainerStarted","Data":"416cc2721f188704e4b7cf003f51e6d2dd0f4f7385c280dfd7b1160d868cf686"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.026572 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mzbqq" event={"ID":"52f84c63-5719-4c32-bbc7-d7960fe35d35","Type":"ContainerStarted","Data":"2385680eb64658fe07f8aa3ec16ec314498bd3d6feafc53834fb4c2d568c94ea"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.033000 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f01b-account-create-update-b985r" event={"ID":"dc08dbb5-2423-4fe9-8c21-a668459cad74","Type":"ContainerStarted","Data":"c45d2c9f516bceabb6c91c348f68e974205ef1034563c42f6346b513ae9f2b4e"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.033046 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f01b-account-create-update-b985r" event={"ID":"dc08dbb5-2423-4fe9-8c21-a668459cad74","Type":"ContainerStarted","Data":"2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.043054 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tq8xx" event={"ID":"dc09e699-e5ce-4e02-b3ae-ce43d120e70d","Type":"ContainerStarted","Data":"6a970a7bb0cf6a6924c094b8183cf37c24dab48878e09e30bf62063b33da4241"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.043100 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tq8xx" 
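
The "Failed to process watch event ... Status 404" warning above is cAdvisor racing CRI-O: the cgroup for the new nova-api-f01b container appeared and was looked up before the runtime had registered it, which is transient. The slice name itself encodes the pod UID with dashes flattened to underscores, as a small extractor shows (my helper, not kubelet code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Systemd slice names like
// kubepods-besteffort-poddc08dbb5_2423_4fe9_8c21_a668459cad74.slice
// embed the pod UID with '-' replaced by '_'.
var sliceRe = regexp.MustCompile(`pod([0-9a-f_]+)\.slice`)

func podUIDFromSlice(path string) (string, bool) {
	m := sliceRe.FindStringSubmatch(path)
	if m == nil {
		return "", false
	}
	return strings.ReplaceAll(m[1], "_", "-"), true
}

func main() {
	uid, _ := podUIDFromSlice("/kubepods.slice/kubepods-besteffort.slice/" +
		"kubepods-besteffort-poddc08dbb5_2423_4fe9_8c21_a668459cad74.slice/" +
		"crio-2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6")
	fmt.Println(uid) // dc08dbb5-2423-4fe9-8c21-a668459cad74
}
```

The recovered UID matches the one the surrounding entries attribute to nova-api-f01b-account-create-update-b985r.
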
event={"ID":"dc09e699-e5ce-4e02-b3ae-ce43d120e70d","Type":"ContainerStarted","Data":"dcf8630afc437b357fee41d6f6f5be42746432e209b3afa7319f44eff59c5a8e"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.051334 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jqvzw" event={"ID":"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae","Type":"ContainerStarted","Data":"a68b313338833953d1d9cc02ae7888a7ecbd0081546779d13fd6e917a1c90e05"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.063764 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-f01b-account-create-update-b985r" podStartSLOduration=2.063744507 podStartE2EDuration="2.063744507s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:08.051722478 +0000 UTC m=+1558.878285299" watchObservedRunningTime="2026-01-28 18:39:08.063744507 +0000 UTC m=+1558.890307328" Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.083399 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-tq8xx" podStartSLOduration=2.083375891 podStartE2EDuration="2.083375891s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:08.073187784 +0000 UTC m=+1558.899750615" watchObservedRunningTime="2026-01-28 18:39:08.083375891 +0000 UTC m=+1558.909938712" Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.981050 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.981608 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.034999 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.039945 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.081123 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jqvzw" event={"ID":"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae","Type":"ContainerStarted","Data":"4bc3d7f5e4e6dada67f4a141ee7828a9a6e0f2e232ee13a2c55fb56665c8dcf7"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.100694 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerID="6264c75e309967c9f20db46eab077cb1a5ee5f417ccd8f79e08cda266fd4cda5" exitCode=0 Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.100769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"6264c75e309967c9f20db46eab077cb1a5ee5f417ccd8f79e08cda266fd4cda5"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.100803 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"831d830f0ce8de8c61fae9ceebb6944114447b863f9b44abf86e65cce9b70907"} Jan 28 18:39:09 crc kubenswrapper[4985]: 
I0128 18:39:09.100814 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="831d830f0ce8de8c61fae9ceebb6944114447b863f9b44abf86e65cce9b70907" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.113578 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" event={"ID":"b4efe2ca-1bc9-40db-944e-fb86222e4f98","Type":"ContainerStarted","Data":"93175a518881e892d15535448d5c38da897596006be51be39132a6908ffae666"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.118206 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-jqvzw" podStartSLOduration=3.118182096 podStartE2EDuration="3.118182096s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:09.109655815 +0000 UTC m=+1559.936218636" watchObservedRunningTime="2026-01-28 18:39:09.118182096 +0000 UTC m=+1559.944744937" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.129693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" event={"ID":"75ac3925-bebe-4c63-999f-073386005723","Type":"ContainerStarted","Data":"c2b4778aba3ad4aab0ffc010a57b2670dae7de8ea4b986e78468cc76f9181467"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.129756 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" event={"ID":"75ac3925-bebe-4c63-999f-073386005723","Type":"ContainerStarted","Data":"9ccd623fbd6d8642ac8136c5acacb7e7c9cc2077b957698537cbb98c6ec3d29f"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.139538 4985 generic.go:334] "Generic (PLEG): container finished" podID="52f84c63-5719-4c32-bbc7-d7960fe35d35" containerID="d941727c28e1382267609d1ceda76e73a9f2d9cd3d596bc04e5cda672a1166cb" exitCode=0 Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.139628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mzbqq" event={"ID":"52f84c63-5719-4c32-bbc7-d7960fe35d35","Type":"ContainerDied","Data":"d941727c28e1382267609d1ceda76e73a9f2d9cd3d596bc04e5cda672a1166cb"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.143118 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" podStartSLOduration=3.143095849 podStartE2EDuration="3.143095849s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:09.133122437 +0000 UTC m=+1559.959685288" watchObservedRunningTime="2026-01-28 18:39:09.143095849 +0000 UTC m=+1559.969658670" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.156517 4985 generic.go:334] "Generic (PLEG): container finished" podID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" containerID="6a970a7bb0cf6a6924c094b8183cf37c24dab48878e09e30bf62063b33da4241" exitCode=0 Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.156709 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tq8xx" event={"ID":"dc09e699-e5ce-4e02-b3ae-ce43d120e70d","Type":"ContainerDied","Data":"6a970a7bb0cf6a6924c094b8183cf37c24dab48878e09e30bf62063b33da4241"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.157443 4985 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.157464 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.178365 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" podStartSLOduration=3.178347664 podStartE2EDuration="3.178347664s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:09.153613626 +0000 UTC m=+1559.980176437" watchObservedRunningTime="2026-01-28 18:39:09.178347664 +0000 UTC m=+1560.004910485" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.320580 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323143 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323193 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323416 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323436 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323468 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323532 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323594 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.335106 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.339541 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh" (OuterVolumeSpecName: "kube-api-access-qbqvh") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "kube-api-access-qbqvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.339947 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.354373 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts" (OuterVolumeSpecName: "scripts") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.421451 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426732 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426774 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426787 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426799 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426809 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.494034 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.525343 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data" (OuterVolumeSpecName: "config-data") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.529372 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.529404 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.192020 4985 generic.go:334] "Generic (PLEG): container finished" podID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.192118 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" event={"ID":"0db5c7c8-1c53-42d0-8e23-f1cba882d552","Type":"ContainerDied","Data":"18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b"} Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.202070 4985 generic.go:334] "Generic (PLEG): container finished" podID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" containerID="93175a518881e892d15535448d5c38da897596006be51be39132a6908ffae666" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.202131 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" event={"ID":"b4efe2ca-1bc9-40db-944e-fb86222e4f98","Type":"ContainerDied","Data":"93175a518881e892d15535448d5c38da897596006be51be39132a6908ffae666"} Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.206146 4985 generic.go:334] "Generic (PLEG): container finished" podID="75ac3925-bebe-4c63-999f-073386005723" containerID="c2b4778aba3ad4aab0ffc010a57b2670dae7de8ea4b986e78468cc76f9181467" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.206220 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" event={"ID":"75ac3925-bebe-4c63-999f-073386005723","Type":"ContainerDied","Data":"c2b4778aba3ad4aab0ffc010a57b2670dae7de8ea4b986e78468cc76f9181467"} Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.208735 4985 generic.go:334] "Generic (PLEG): container finished" podID="dc08dbb5-2423-4fe9-8c21-a668459cad74" containerID="c45d2c9f516bceabb6c91c348f68e974205ef1034563c42f6346b513ae9f2b4e" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.208789 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f01b-account-create-update-b985r" event={"ID":"dc08dbb5-2423-4fe9-8c21-a668459cad74","Type":"ContainerDied","Data":"c45d2c9f516bceabb6c91c348f68e974205ef1034563c42f6346b513ae9f2b4e"} Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.214722 4985 generic.go:334] "Generic (PLEG): container finished" podID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" containerID="4bc3d7f5e4e6dada67f4a141ee7828a9a6e0f2e232ee13a2c55fb56665c8dcf7" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.214878 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.214887 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jqvzw" event={"ID":"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae","Type":"ContainerDied","Data":"4bc3d7f5e4e6dada67f4a141ee7828a9a6e0f2e232ee13a2c55fb56665c8dcf7"}
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.412096 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.431622 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.445928 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:10 crc kubenswrapper[4985]: E0128 18:39:10.453629 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="sg-core"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.453687 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="sg-core"
Jan 28 18:39:10 crc kubenswrapper[4985]: E0128 18:39:10.453717 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-central-agent"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.453723 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-central-agent"
Jan 28 18:39:10 crc kubenswrapper[4985]: E0128 18:39:10.453734 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="proxy-httpd"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.453739 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="proxy-httpd"
Jan 28 18:39:10 crc kubenswrapper[4985]: E0128 18:39:10.453756 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-notification-agent"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.453762 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-notification-agent"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.454590 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="sg-core"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.454614 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-central-agent"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.454627 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="proxy-httpd"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.454641 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-notification-agent"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.456665 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.456779 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
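ceilometer-0 is deleted and immediately recreated here, so the pod name survives but the UID changes (fe11ac1b... above, f65f780c... below). The CPU and memory managers checkpoint per-container state keyed by pod UID, so before admitting the replacement they purge entries for UIDs that are no longer active, which is exactly what the RemoveStaleState lines record. A minimal sketch of that cleanup, assuming a simple map-based store rather than the managers' real checkpoint types:

package main

import "fmt"

// A stand-in for the CPU/memory manager checkpoint: state keyed by pod UID,
// then container name (the log's podUID= / containerName= pairs).
type stateStore map[string]map[string]string

// removeStaleState drops every entry whose pod UID is no longer active,
// mirroring what cpu_manager.go and memory_manager.go log during admission.
func removeStaleState(s stateStore, active map[string]bool) {
	for uid, containers := range s {
		if active[uid] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", uid, name)
		}
		delete(s, uid)
	}
}

func main() {
	s := stateStore{
		"fe11ac1b-2633-40fd-b359-01d3309299a8": {"sg-core": "cpuset=0-1", "proxy-httpd": "cpuset=2"},
		"f65f780c-a6a6-4e63-a21c-962724bb8c56": {"sg-core": "cpuset=0-1"},
	}
	// Only the recreated ceilometer-0 pod, with its new UID, is still active.
	removeStaleState(s, map[string]bool{"f65f780c-a6a6-4e63-a21c-962724bb8c56": true})
	fmt.Println("remaining pods:", len(s))
}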
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.460978 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.468781 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489236 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489445 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489470 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489568 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489839 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.490193 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.490286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594336 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594383 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594446 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594465 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594534 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.605240 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.605580 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.610406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.615931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.639652 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.640820 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.647190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.785459 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.806319 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b5c69f9bd-9jvz9"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.812981 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tq8xx"
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901120 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") pod \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") "
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901166 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") pod \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") "
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901209 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") pod \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") "
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901273 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") pod \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") "
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901341 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") pod \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") "
Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901433 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") pod \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") "
\"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.903784 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc09e699-e5ce-4e02-b3ae-ce43d120e70d" (UID: "dc09e699-e5ce-4e02-b3ae-ce43d120e70d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.907668 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0db5c7c8-1c53-42d0-8e23-f1cba882d552" (UID: "0db5c7c8-1c53-42d0-8e23-f1cba882d552"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.907749 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll" (OuterVolumeSpecName: "kube-api-access-ghxll") pod "dc09e699-e5ce-4e02-b3ae-ce43d120e70d" (UID: "dc09e699-e5ce-4e02-b3ae-ce43d120e70d"). InnerVolumeSpecName "kube-api-access-ghxll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.910433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6" (OuterVolumeSpecName: "kube-api-access-tkrx6") pod "0db5c7c8-1c53-42d0-8e23-f1cba882d552" (UID: "0db5c7c8-1c53-42d0-8e23-f1cba882d552"). InnerVolumeSpecName "kube-api-access-tkrx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.940517 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0db5c7c8-1c53-42d0-8e23-f1cba882d552" (UID: "0db5c7c8-1c53-42d0-8e23-f1cba882d552"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.958747 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.980080 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data" (OuterVolumeSpecName: "config-data") pod "0db5c7c8-1c53-42d0-8e23-f1cba882d552" (UID: "0db5c7c8-1c53-42d0-8e23-f1cba882d552"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.999561 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.999602 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.003327 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") pod \"52f84c63-5719-4c32-bbc7-d7960fe35d35\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.003576 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") pod \"52f84c63-5719-4c32-bbc7-d7960fe35d35\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.003940 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52f84c63-5719-4c32-bbc7-d7960fe35d35" (UID: "52f84c63-5719-4c32-bbc7-d7960fe35d35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004483 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004507 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004659 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004675 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004685 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004697 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004707 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: 
I0128 18:39:11.007123 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh" (OuterVolumeSpecName: "kube-api-access-xrdwh") pod "52f84c63-5719-4c32-bbc7-d7960fe35d35" (UID: "52f84c63-5719-4c32-bbc7-d7960fe35d35"). InnerVolumeSpecName "kube-api-access-xrdwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.046544 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.049779 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.106003 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.186413 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.186468 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.231525 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" event={"ID":"0db5c7c8-1c53-42d0-8e23-f1cba882d552","Type":"ContainerDied","Data":"2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8"} Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.231898 4985 scope.go:117] "RemoveContainer" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.231563 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.233527 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mzbqq" event={"ID":"52f84c63-5719-4c32-bbc7-d7960fe35d35","Type":"ContainerDied","Data":"2385680eb64658fe07f8aa3ec16ec314498bd3d6feafc53834fb4c2d568c94ea"} Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.233558 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2385680eb64658fe07f8aa3ec16ec314498bd3d6feafc53834fb4c2d568c94ea" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.233604 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.245593 4985 util.go:48] "No ready sandbox for pod can be found. 
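The machine-config-daemon liveness failure above is the plain HTTP probe path: the kubelet GETs the configured endpoint with a short timeout and counts a connection error or bad status as a failure. A reduced equivalent, with the URL and one-second timeout taken from the log rather than from real probe configuration:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP GETs the endpoint and treats any transport error or a status
// outside 2xx/3xx as a probe failure, roughly as an HTTP liveness probe does.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHTTP("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Printf("Probe failed: probeType=Liveness output=%q\n", err)
	}
}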
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.245874 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tq8xx" event={"ID":"dc09e699-e5ce-4e02-b3ae-ce43d120e70d","Type":"ContainerDied","Data":"dcf8630afc437b357fee41d6f6f5be42746432e209b3afa7319f44eff59c5a8e"}
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.245942 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf8630afc437b357fee41d6f6f5be42746432e209b3afa7319f44eff59c5a8e"
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.249668 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.249709 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.301048 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" path="/var/lib/kubelet/pods/fe11ac1b-2633-40fd-b359-01d3309299a8/volumes"
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.358315 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"]
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.399725 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.415049 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"]
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.574638 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jqvzw"
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.720216 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") pod \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") "
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.722145 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") pod \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") "
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.738062 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" (UID: "253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.778178 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49" (OuterVolumeSpecName: "kube-api-access-fbv49") pod "253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" (UID: "253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae"). InnerVolumeSpecName "kube-api-access-fbv49". PluginName "kubernetes.io/projected", VolumeGidValue ""
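The SyncLoop ADD/UPDATE/DELETE/REMOVE (source="api"), "SyncLoop (PLEG)", and "SyncLoop (probe)" records throughout this log are all emitted from one loop that multiplexes several input channels. A minimal sketch of that select-based iteration, with illustrative channel and struct names that are not the kubelet's real ones:

package main

import "fmt"

type podUpdate struct {
	Op, Source string
	Pods       []string
}

type plegEvent struct{ Pod, Type, Data string }

// syncLoopIteration drains whichever source has work, in the spirit of the
// kubelet.go "SyncLoop ..." lines above; it returns false when config closes.
func syncLoopIteration(cfg <-chan podUpdate, pleg <-chan plegEvent, probe <-chan string) bool {
	select {
	case u, ok := <-cfg:
		if !ok {
			return false
		}
		fmt.Printf("SyncLoop %s source=%q pods=%v\n", u.Op, u.Source, u.Pods)
	case e := <-pleg:
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.Pod, e.Type, e.Data)
	case p := <-probe:
		fmt.Printf("SyncLoop (probe) pod=%s\n", p)
	}
	return true
}

func main() {
	cfg := make(chan podUpdate, 1)
	cfg <- podUpdate{"ADD", "api", []string{"openstack/ceilometer-0"}}
	close(cfg)
	pleg := make(chan plegEvent)
	probe := make(chan string)
	for syncLoopIteration(cfg, pleg, probe) {
	}
}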
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.845048 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.845129 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.056183 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.078586 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.080357 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.114036 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:39:12 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:39:12 crc kubenswrapper[4985]: > Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152342 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") pod \"dc08dbb5-2423-4fe9-8c21-a668459cad74\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152393 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") pod \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152424 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") pod \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152558 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") pod \"75ac3925-bebe-4c63-999f-073386005723\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152633 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") pod \"dc08dbb5-2423-4fe9-8c21-a668459cad74\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152804 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") pod \"75ac3925-bebe-4c63-999f-073386005723\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.154629 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4efe2ca-1bc9-40db-944e-fb86222e4f98" (UID: "b4efe2ca-1bc9-40db-944e-fb86222e4f98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.155617 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75ac3925-bebe-4c63-999f-073386005723" (UID: "75ac3925-bebe-4c63-999f-073386005723"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.157501 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc08dbb5-2423-4fe9-8c21-a668459cad74" (UID: "dc08dbb5-2423-4fe9-8c21-a668459cad74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.161780 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl" (OuterVolumeSpecName: "kube-api-access-22ppl") pod "dc08dbb5-2423-4fe9-8c21-a668459cad74" (UID: "dc08dbb5-2423-4fe9-8c21-a668459cad74"). InnerVolumeSpecName "kube-api-access-22ppl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.162026 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c" (OuterVolumeSpecName: "kube-api-access-q4f6c") pod "b4efe2ca-1bc9-40db-944e-fb86222e4f98" (UID: "b4efe2ca-1bc9-40db-944e-fb86222e4f98"). InnerVolumeSpecName "kube-api-access-q4f6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.169353 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs" (OuterVolumeSpecName: "kube-api-access-dclxs") pod "75ac3925-bebe-4c63-999f-073386005723" (UID: "75ac3925-bebe-4c63-999f-073386005723"). InnerVolumeSpecName "kube-api-access-dclxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.256816 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.257759 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.257830 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.257891 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.257965 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.258065 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.270950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" event={"ID":"b4efe2ca-1bc9-40db-944e-fb86222e4f98","Type":"ContainerDied","Data":"416cc2721f188704e4b7cf003f51e6d2dd0f4f7385c280dfd7b1160d868cf686"} Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.270989 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="416cc2721f188704e4b7cf003f51e6d2dd0f4f7385c280dfd7b1160d868cf686" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.271047 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.279841 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.280859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" event={"ID":"75ac3925-bebe-4c63-999f-073386005723","Type":"ContainerDied","Data":"9ccd623fbd6d8642ac8136c5acacb7e7c9cc2077b957698537cbb98c6ec3d29f"} Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.280933 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ccd623fbd6d8642ac8136c5acacb7e7c9cc2077b957698537cbb98c6ec3d29f" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.285990 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f01b-account-create-update-b985r" event={"ID":"dc08dbb5-2423-4fe9-8c21-a668459cad74","Type":"ContainerDied","Data":"2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6"} Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.286046 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.286116 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.292880 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jqvzw" event={"ID":"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae","Type":"ContainerDied","Data":"a68b313338833953d1d9cc02ae7888a7ecbd0081546779d13fd6e917a1c90e05"} Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.292927 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68b313338833953d1d9cc02ae7888a7ecbd0081546779d13fd6e917a1c90e05" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.293019 4985 util.go:48] "No ready sandbox for pod can be found. 
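Every record in this file follows the klog format embedded in a journald line: the syslog prefix ("Jan 28 18:39:12 crc kubenswrapper[4985]:") followed by severity letter, MMDD date, wall-clock time with microseconds, PID, source file:line, and the message. A small parser for that shape, useful when post-processing logs like this one; the regular expression is an approximation, not klog's own grammar:

package main

import (
	"fmt"
	"regexp"
)

// Groups: severity (I/W/E/F), MMDD, time, PID, source file:line, message.
var klogLine = regexp.MustCompile(
	`kubenswrapper\[\d+\]: ([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

func main() {
	line := `Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.292927 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68b313338833953d1d9cc02ae7888a7ecbd0081546779d13fd6e917a1c90e05"`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}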
Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.314631 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a"}
Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.314902 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"426d361783f148b2f6c2b7e23079a36d36f18ddb17a5125f59aee3cbdab7bba2"}
Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.138013 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.138646 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.144396 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.297605 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" path="/var/lib/kubelet/pods/0db5c7c8-1c53-42d0-8e23-f1cba882d552/volumes"
Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.345611 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.345636 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.345631 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531"}
Jan 28 18:39:14 crc kubenswrapper[4985]: I0128 18:39:14.135712 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:14 crc kubenswrapper[4985]: I0128 18:39:14.380434 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 18:39:14 crc kubenswrapper[4985]: I0128 18:39:14.381571 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b"}
Jan 28 18:39:14 crc kubenswrapper[4985]: I0128 18:39:14.703690 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.125115 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.405309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843"}
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.406827 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.425688 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5541913320000003 podStartE2EDuration="6.425670563s" podCreationTimestamp="2026-01-28 18:39:10 +0000 UTC" firstStartedPulling="2026-01-28 18:39:11.44083546 +0000 UTC m=+1562.267398281" lastFinishedPulling="2026-01-28 18:39:15.312314691 +0000 UTC m=+1566.138877512" observedRunningTime="2026-01-28 18:39:16.421397112 +0000 UTC m=+1567.247959933" watchObservedRunningTime="2026-01-28 18:39:16.425670563 +0000 UTC m=+1567.252233384"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.973143 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"]
Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979164 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc08dbb5-2423-4fe9-8c21-a668459cad74" containerName="mariadb-account-create-update"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979192 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc08dbb5-2423-4fe9-8c21-a668459cad74" containerName="mariadb-account-create-update"
Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979217 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" containerName="mariadb-database-create"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979225 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" containerName="mariadb-database-create"
Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979236 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979259 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine"
Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979272 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" containerName="mariadb-account-create-update"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979280 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" containerName="mariadb-account-create-update"
Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979309 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" containerName="mariadb-database-create"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979317 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" containerName="mariadb-database-create"
Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979334 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ac3925-bebe-4c63-999f-073386005723" containerName="mariadb-account-create-update"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979340 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ac3925-bebe-4c63-999f-073386005723" containerName="mariadb-account-create-update"
Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979353 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f84c63-5719-4c32-bbc7-d7960fe35d35" containerName="mariadb-database-create"
Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979360 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f84c63-5719-4c32-bbc7-d7960fe35d35" containerName="mariadb-database-create"
containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979652 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979679 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc08dbb5-2423-4fe9-8c21-a668459cad74" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979695 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979708 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f84c63-5719-4c32-bbc7-d7960fe35d35" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979723 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ac3925-bebe-4c63-999f-073386005723" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979744 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979756 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.980778 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.984703 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5bk5t" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.984897 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.985008 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.025492 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"] Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.086955 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.087047 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.087438 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: 
\"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.087721 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.189896 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.189989 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.190034 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.190134 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.197214 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.204038 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.208731 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.211581 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") pod 
\"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.329792 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.420746 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-central-agent" containerID="cri-o://d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" gracePeriod=30 Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.421330 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-notification-agent" containerID="cri-o://f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" gracePeriod=30 Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.421370 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="proxy-httpd" containerID="cri-o://ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" gracePeriod=30 Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.421425 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="sg-core" containerID="cri-o://5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" gracePeriod=30 Jan 28 18:39:18 crc kubenswrapper[4985]: W0128 18:39:18.000717 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf5e9657_f657_4f0e_9d46_31c6942e70d2.slice/crio-7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a WatchSource:0}: Error finding container 7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a: Status 404 returned error can't find the container with id 7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.006151 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"] Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.431716 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wnljz" event={"ID":"df5e9657-f657-4f0e-9d46-31c6942e70d2","Type":"ContainerStarted","Data":"7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a"} Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434489 4985 generic.go:334] "Generic (PLEG): container finished" podID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerID="ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" exitCode=0 Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434540 4985 generic.go:334] "Generic (PLEG): container finished" podID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerID="5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" exitCode=2 Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434531 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843"} Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434613 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b"} Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434627 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531"} Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434550 4985 generic.go:334] "Generic (PLEG): container finished" podID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerID="f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" exitCode=0 Jan 28 18:39:21 crc kubenswrapper[4985]: I0128 18:39:21.118407 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:39:21 crc kubenswrapper[4985]: I0128 18:39:21.177757 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:39:21 crc kubenswrapper[4985]: I0128 18:39:21.532900 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:39:22 crc kubenswrapper[4985]: I0128 18:39:22.480227 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" containerID="cri-o://fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077" gracePeriod=2 Jan 28 18:39:23 crc kubenswrapper[4985]: I0128 18:39:23.493881 4985 generic.go:334] "Generic (PLEG): container finished" podID="1ebe025a-cece-4723-928f-b6649ea27040" containerID="fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077" exitCode=0 Jan 28 18:39:23 crc kubenswrapper[4985]: I0128 18:39:23.494351 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerDied","Data":"fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.244513 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.363720 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") pod \"1ebe025a-cece-4723-928f-b6649ea27040\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.363808 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") pod \"1ebe025a-cece-4723-928f-b6649ea27040\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.363894 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") pod \"1ebe025a-cece-4723-928f-b6649ea27040\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.364336 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities" (OuterVolumeSpecName: "utilities") pod "1ebe025a-cece-4723-928f-b6649ea27040" (UID: "1ebe025a-cece-4723-928f-b6649ea27040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.367878 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.373013 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99" (OuterVolumeSpecName: "kube-api-access-qll99") pod "1ebe025a-cece-4723-928f-b6649ea27040" (UID: "1ebe025a-cece-4723-928f-b6649ea27040"). InnerVolumeSpecName "kube-api-access-qll99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.411570 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.471142 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.510570 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ebe025a-cece-4723-928f-b6649ea27040" (UID: "1ebe025a-cece-4723-928f-b6649ea27040"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.547012 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wnljz" event={"ID":"df5e9657-f657-4f0e-9d46-31c6942e70d2","Type":"ContainerStarted","Data":"ea52163bdf8a3e8c42d7f0dbeffc6baafb9ed87c32e573d1569132ee3f06dfb6"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.549927 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerDied","Data":"cb6d06c38f976feb1cb400142c94c846180c10a5200e7df25e3c5053c66cb609"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.549982 4985 scope.go:117] "RemoveContainer" containerID="fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.550120 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.562764 4985 generic.go:334] "Generic (PLEG): container finished" podID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerID="d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" exitCode=0 Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.562847 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.562858 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.562878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"426d361783f148b2f6c2b7e23079a36d36f18ddb17a5125f59aee3cbdab7bba2"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.567675 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-wnljz" podStartSLOduration=2.586647104 podStartE2EDuration="11.567636977s" podCreationTimestamp="2026-01-28 18:39:16 +0000 UTC" firstStartedPulling="2026-01-28 18:39:18.003980063 +0000 UTC m=+1568.830542884" lastFinishedPulling="2026-01-28 18:39:26.984969936 +0000 UTC m=+1577.811532757" observedRunningTime="2026-01-28 18:39:27.566042092 +0000 UTC m=+1578.392604923" watchObservedRunningTime="2026-01-28 18:39:27.567636977 +0000 UTC m=+1578.394199798" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.575444 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576031 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576092 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576168 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576215 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576255 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576296 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.578389 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.579461 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.579486 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.597534 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh" (OuterVolumeSpecName: "kube-api-access-k9frh") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "kube-api-access-k9frh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.601797 4985 scope.go:117] "RemoveContainer" containerID="ac4c636c19c5a93172c99e41217794568a75dad0ad348a3d4022d6d7bcdfe984" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.604467 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts" (OuterVolumeSpecName: "scripts") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.611683 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.619285 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.628907 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.632102 4985 scope.go:117] "RemoveContainer" containerID="c90878479aa212272619165fb9e5e236c18feef83564d0b2ea60daad9b1b13ff" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.668604 4985 scope.go:117] "RemoveContainer" containerID="ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.674609 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680775 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680816 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680832 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680846 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680858 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680871 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.689660 4985 scope.go:117] "RemoveContainer" containerID="5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.712832 4985 scope.go:117] "RemoveContainer" containerID="f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.733831 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data" (OuterVolumeSpecName: "config-data") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.737581 4985 scope.go:117] "RemoveContainer" containerID="d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.762787 4985 scope.go:117] "RemoveContainer" containerID="ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" Jan 28 18:39:27 crc kubenswrapper[4985]: E0128 18:39:27.763627 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843\": container with ID starting with ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843 not found: ID does not exist" containerID="ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.763666 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843"} err="failed to get container status \"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843\": rpc error: code = NotFound desc = could not find container \"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843\": container with ID starting with ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843 not found: ID does not exist" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.763695 4985 scope.go:117] "RemoveContainer" containerID="5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" Jan 28 18:39:27 crc kubenswrapper[4985]: E0128 18:39:27.764146 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b\": container with ID starting with 5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b not found: ID does not exist" containerID="5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.764199 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b"} err="failed to get container status \"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b\": rpc error: code = NotFound desc = could not find container \"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b\": container with ID starting with 5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b not found: ID does not exist" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.764253 4985 scope.go:117] "RemoveContainer" containerID="f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" Jan 28 18:39:27 crc kubenswrapper[4985]: E0128 18:39:27.764632 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531\": container with ID starting with f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531 not found: ID does not exist" containerID="f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.764680 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531"} err="failed to get container status \"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531\": rpc error: code = NotFound desc = could not find container \"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531\": container with ID starting with f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531 not found: ID does not exist" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.764731 4985 scope.go:117] "RemoveContainer" containerID="d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" Jan 28 18:39:27 crc kubenswrapper[4985]: E0128 18:39:27.765602 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a\": container with ID starting with d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a not found: ID does not exist" containerID="d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.765634 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a"} err="failed to get container status \"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a\": rpc error: code = NotFound desc = could not find container \"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a\": container with ID starting with d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a not found: ID does not exist" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.783484 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.953751 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.990269 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.011719 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.012684 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="proxy-httpd" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.012796 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="proxy-httpd" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.012921 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-central-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013000 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-central-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013098 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="extract-content" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013176 4985 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="extract-content" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013282 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013377 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013476 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-notification-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013561 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-notification-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013651 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="sg-core" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013736 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="sg-core" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013857 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="extract-utilities" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013942 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="extract-utilities" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014315 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014432 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-notification-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014542 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="sg-core" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014627 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="proxy-httpd" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014715 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-central-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.018042 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.020770 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.020936 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.023198 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202413 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202539 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202612 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202672 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202766 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202824 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 
18:39:28.304822 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304861 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304919 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304943 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304978 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.305089 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.305535 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.305606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.310097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.311785 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.313523 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.313954 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.322562 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.336539 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:29 crc kubenswrapper[4985]: I0128 18:39:28.871977 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:29 crc kubenswrapper[4985]: W0128 18:39:28.875934 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfafcaaa1_299d_4b1a_945c_d6c06e9f9a17.slice/crio-c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c WatchSource:0}: Error finding container c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c: Status 404 returned error can't find the container with id c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c Jan 28 18:39:29 crc kubenswrapper[4985]: I0128 18:39:29.277426 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ebe025a-cece-4723-928f-b6649ea27040" path="/var/lib/kubelet/pods/1ebe025a-cece-4723-928f-b6649ea27040/volumes" Jan 28 18:39:29 crc kubenswrapper[4985]: I0128 18:39:29.278666 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" path="/var/lib/kubelet/pods/f65f780c-a6a6-4e63-a21c-962724bb8c56/volumes" Jan 28 18:39:29 crc kubenswrapper[4985]: I0128 18:39:29.628307 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c"} Jan 28 18:39:30 crc kubenswrapper[4985]: I0128 18:39:30.642818 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2"} Jan 28 18:39:30 crc kubenswrapper[4985]: I0128 18:39:30.643321 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877"} Jan 28 18:39:31 crc kubenswrapper[4985]: I0128 18:39:31.658611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4"} Jan 28 18:39:34 crc kubenswrapper[4985]: I0128 18:39:34.695233 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c"} Jan 28 18:39:34 crc kubenswrapper[4985]: I0128 18:39:34.695861 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:39:34 crc kubenswrapper[4985]: I0128 18:39:34.727592 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.438711212 podStartE2EDuration="7.727568958s" podCreationTimestamp="2026-01-28 18:39:27 +0000 UTC" firstStartedPulling="2026-01-28 18:39:28.878110645 +0000 UTC m=+1579.704673466" lastFinishedPulling="2026-01-28 18:39:34.166968391 +0000 UTC m=+1584.993531212" observedRunningTime="2026-01-28 18:39:34.720031645 +0000 UTC m=+1585.546594466" watchObservedRunningTime="2026-01-28 18:39:34.727568958 +0000 UTC m=+1585.554131779" Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.384131 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.384776 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-central-agent" containerID="cri-o://0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877" gracePeriod=30 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.385918 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="sg-core" containerID="cri-o://448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4" gracePeriod=30 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.385936 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-notification-agent" containerID="cri-o://83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2" gracePeriod=30 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.386377 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="proxy-httpd" containerID="cri-o://602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c" gracePeriod=30 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739033 4985 generic.go:334] "Generic (PLEG): container finished" podID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerID="602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c" exitCode=0 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739313 4985 generic.go:334] "Generic (PLEG): container finished" podID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerID="448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4" exitCode=2 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739401 4985 generic.go:334] "Generic (PLEG): container finished" podID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerID="83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2" exitCode=0 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739489 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c"} Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739580 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4"} Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739652 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2"} Jan 28 18:39:40 crc kubenswrapper[4985]: I0128 18:39:40.777021 4985 generic.go:334] "Generic (PLEG): container finished" podID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerID="0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877" exitCode=0 Jan 28 18:39:40 crc kubenswrapper[4985]: I0128 18:39:40.777104 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877"} Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.134102 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.186015 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.186076 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.186121 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.186982 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.187041 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" gracePeriod=600 Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.226643 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: 
\"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.226784 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.226976 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227097 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227154 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227233 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227240 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227328 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227982 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.228060 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.233954 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5" (OuterVolumeSpecName: "kube-api-access-kmjp5") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "kube-api-access-kmjp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.234513 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts" (OuterVolumeSpecName: "scripts") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.280776 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.316609 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.331483 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.331545 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.331559 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.331572 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.332528 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.395207 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data" (OuterVolumeSpecName: "config-data") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.434202 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.434477 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.791771 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c"} Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.791825 4985 scope.go:117] "RemoveContainer" containerID="602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.791853 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.796058 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" exitCode=0 Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.796094 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"} Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.796519 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.797998 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.832455 4985 scope.go:117] "RemoveContainer" containerID="448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.869679 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.881937 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.913982 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.914921 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-notification-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.914949 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-notification-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.914966 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-central-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.914974 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-central-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.915006 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="proxy-httpd" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915014 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="proxy-httpd" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.915047 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="sg-core" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915054 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="sg-core" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915354 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-central-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915385 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="proxy-httpd" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915404 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="sg-core" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915416 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-notification-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.916003 4985 scope.go:117] "RemoveContainer" containerID="83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.918591 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.923300 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.923566 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.955174 4985 scope.go:117] "RemoveContainer" containerID="0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.969190 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.980399 4985 scope.go:117] "RemoveContainer" containerID="236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046525 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046806 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046953 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.047016 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.149164 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.149903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150044 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150212 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150619 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150721 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150827 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.151817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.151832 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.159351 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.160738 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.164761 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.168463 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.171721 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.249854 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.795483 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:42 crc kubenswrapper[4985]: W0128 18:39:42.796482 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2c9e260_5f3f_4c90_a567_384b852ce092.slice/crio-ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc WatchSource:0}: Error finding container ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc: Status 404 returned error can't find the container with id ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc
Jan 28 18:39:43 crc kubenswrapper[4985]: I0128 18:39:43.282008 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" path="/var/lib/kubelet/pods/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17/volumes"
Jan 28 18:39:43 crc kubenswrapper[4985]: I0128 18:39:43.826736 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292"}
Jan 28 18:39:43 crc kubenswrapper[4985]: I0128 18:39:43.827095 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc"}
Jan 28 18:39:44 crc kubenswrapper[4985]: I0128 18:39:44.843556 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6"}
event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056"} Jan 28 18:39:47 crc kubenswrapper[4985]: I0128 18:39:47.888217 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b"} Jan 28 18:39:47 crc kubenswrapper[4985]: I0128 18:39:47.888885 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:39:47 crc kubenswrapper[4985]: I0128 18:39:47.916511 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.134068043 podStartE2EDuration="6.916487182s" podCreationTimestamp="2026-01-28 18:39:41 +0000 UTC" firstStartedPulling="2026-01-28 18:39:42.798961852 +0000 UTC m=+1593.625524673" lastFinishedPulling="2026-01-28 18:39:47.581380991 +0000 UTC m=+1598.407943812" observedRunningTime="2026-01-28 18:39:47.908527707 +0000 UTC m=+1598.735090528" watchObservedRunningTime="2026-01-28 18:39:47.916487182 +0000 UTC m=+1598.743050003" Jan 28 18:39:49 crc kubenswrapper[4985]: I0128 18:39:49.919799 4985 generic.go:334] "Generic (PLEG): container finished" podID="df5e9657-f657-4f0e-9d46-31c6942e70d2" containerID="ea52163bdf8a3e8c42d7f0dbeffc6baafb9ed87c32e573d1569132ee3f06dfb6" exitCode=0 Jan 28 18:39:49 crc kubenswrapper[4985]: I0128 18:39:49.919928 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wnljz" event={"ID":"df5e9657-f657-4f0e-9d46-31c6942e70d2","Type":"ContainerDied","Data":"ea52163bdf8a3e8c42d7f0dbeffc6baafb9ed87c32e573d1569132ee3f06dfb6"} Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.688080 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.696535 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.696535 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg"
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.772347 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") "
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.772702 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") pod \"89fc2c75-41eb-441e-a171-5c716b823277\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") "
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.772940 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") pod \"89fc2c75-41eb-441e-a171-5c716b823277\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") "
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.772985 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") "
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.773033 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") "
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.773066 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") pod \"89fc2c75-41eb-441e-a171-5c716b823277\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") "
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.773106 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") pod \"89fc2c75-41eb-441e-a171-5c716b823277\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") "
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.773181 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") "
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.781783 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "89fc2c75-41eb-441e-a171-5c716b823277" (UID: "89fc2c75-41eb-441e-a171-5c716b823277"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.784593 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n" (OuterVolumeSpecName: "kube-api-access-cp56n") pod "1373681b-8290-4963-897b-b5b27690e19a" (UID: "1373681b-8290-4963-897b-b5b27690e19a"). InnerVolumeSpecName "kube-api-access-cp56n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.785294 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd" (OuterVolumeSpecName: "kube-api-access-79bgd") pod "89fc2c75-41eb-441e-a171-5c716b823277" (UID: "89fc2c75-41eb-441e-a171-5c716b823277"). InnerVolumeSpecName "kube-api-access-79bgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.823426 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1373681b-8290-4963-897b-b5b27690e19a" (UID: "1373681b-8290-4963-897b-b5b27690e19a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.848637 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89fc2c75-41eb-441e-a171-5c716b823277" (UID: "89fc2c75-41eb-441e-a171-5c716b823277"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.857207 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data" (OuterVolumeSpecName: "config-data") pod "89fc2c75-41eb-441e-a171-5c716b823277" (UID: "89fc2c75-41eb-441e-a171-5c716b823277"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.874310 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data" (OuterVolumeSpecName: "config-data") pod "1373681b-8290-4963-897b-b5b27690e19a" (UID: "1373681b-8290-4963-897b-b5b27690e19a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875170 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " Jan 28 18:39:50 crc kubenswrapper[4985]: W0128 18:39:50.875305 4985 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1373681b-8290-4963-897b-b5b27690e19a/volumes/kubernetes.io~secret/config-data Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875327 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data" (OuterVolumeSpecName: "config-data") pod "1373681b-8290-4963-897b-b5b27690e19a" (UID: "1373681b-8290-4963-897b-b5b27690e19a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875917 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875946 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875955 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875965 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875974 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875985 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875997 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.876005 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.936652 4985 generic.go:334] "Generic (PLEG): container finished" podID="1373681b-8290-4963-897b-b5b27690e19a" containerID="0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a" exitCode=137 Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.936744 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" event={"ID":"1373681b-8290-4963-897b-b5b27690e19a","Type":"ContainerDied","Data":"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a"}
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.936784 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" event={"ID":"1373681b-8290-4963-897b-b5b27690e19a","Type":"ContainerDied","Data":"7f8aaec146afdcb274b6be4540ed468073cb056ab2a74bd69ec462b02099487a"}
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.936702 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg"
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.936809 4985 scope.go:117] "RemoveContainer" containerID="0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a"
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.951869 4985 generic.go:334] "Generic (PLEG): container finished" podID="89fc2c75-41eb-441e-a171-5c716b823277" containerID="06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00" exitCode=137
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.952335 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5965d558dc-cg7wv" event={"ID":"89fc2c75-41eb-441e-a171-5c716b823277","Type":"ContainerDied","Data":"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00"}
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.952410 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5965d558dc-cg7wv" event={"ID":"89fc2c75-41eb-441e-a171-5c716b823277","Type":"ContainerDied","Data":"af15e77d0cac085450dbdbf09aea29f94aab86926bae124219c8abb6e3a9c5c2"}
Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.952371 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5965d558dc-cg7wv"
Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.010944 4985 scope.go:117] "RemoveContainer" containerID="0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a"
Jan 28 18:39:51 crc kubenswrapper[4985]: E0128 18:39:51.015922 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a\": container with ID starting with 0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a not found: ID does not exist" containerID="0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a"
Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.016096 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a"} err="failed to get container status \"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a\": rpc error: code = NotFound desc = could not find container \"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a\": container with ID starting with 0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a not found: ID does not exist"
Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.016130 4985 scope.go:117] "RemoveContainer" containerID="06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00"
Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.037342 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"]
Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.061568 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"]
Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.081631 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"]
Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.098595 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"]
Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.132654 4985 scope.go:117] "RemoveContainer" containerID="06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00"
Jan 28 18:39:51 crc kubenswrapper[4985]: E0128 18:39:51.133459 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00\": container with ID starting with 06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00 not found: ID does not exist" containerID="06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00"
Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.133515 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00"} err="failed to get container status \"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00\": rpc error: code = NotFound desc = could not find container \"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00\": container with ID starting with 06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00 not found: ID does not exist"
path="/var/lib/kubelet/pods/1373681b-8290-4963-897b-b5b27690e19a/volumes" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.319749 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fc2c75-41eb-441e-a171-5c716b823277" path="/var/lib/kubelet/pods/89fc2c75-41eb-441e-a171-5c716b823277/volumes" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.695903 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.825705 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") pod \"df5e9657-f657-4f0e-9d46-31c6942e70d2\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.825886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") pod \"df5e9657-f657-4f0e-9d46-31c6942e70d2\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.826026 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") pod \"df5e9657-f657-4f0e-9d46-31c6942e70d2\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.826069 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") pod \"df5e9657-f657-4f0e-9d46-31c6942e70d2\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.834849 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb" (OuterVolumeSpecName: "kube-api-access-8gpjb") pod "df5e9657-f657-4f0e-9d46-31c6942e70d2" (UID: "df5e9657-f657-4f0e-9d46-31c6942e70d2"). InnerVolumeSpecName "kube-api-access-8gpjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.871448 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts" (OuterVolumeSpecName: "scripts") pod "df5e9657-f657-4f0e-9d46-31c6942e70d2" (UID: "df5e9657-f657-4f0e-9d46-31c6942e70d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.917230 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data" (OuterVolumeSpecName: "config-data") pod "df5e9657-f657-4f0e-9d46-31c6942e70d2" (UID: "df5e9657-f657-4f0e-9d46-31c6942e70d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.934500 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.935166 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.935325 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.935017 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df5e9657-f657-4f0e-9d46-31c6942e70d2" (UID: "df5e9657-f657-4f0e-9d46-31c6942e70d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.991987 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.992016 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wnljz" event={"ID":"df5e9657-f657-4f0e-9d46-31c6942e70d2","Type":"ContainerDied","Data":"7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a"} Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.992062 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.037406 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.076569 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:39:52 crc kubenswrapper[4985]: E0128 18:39:52.077356 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1373681b-8290-4963-897b-b5b27690e19a" containerName="heat-cfnapi" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.077444 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1373681b-8290-4963-897b-b5b27690e19a" containerName="heat-cfnapi" Jan 28 18:39:52 crc kubenswrapper[4985]: E0128 18:39:52.077518 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fc2c75-41eb-441e-a171-5c716b823277" containerName="heat-api" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.077581 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fc2c75-41eb-441e-a171-5c716b823277" containerName="heat-api" Jan 28 18:39:52 crc kubenswrapper[4985]: E0128 18:39:52.077681 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5e9657-f657-4f0e-9d46-31c6942e70d2" containerName="nova-cell0-conductor-db-sync" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.077743 4985 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="df5e9657-f657-4f0e-9d46-31c6942e70d2" containerName="nova-cell0-conductor-db-sync" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.078126 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fc2c75-41eb-441e-a171-5c716b823277" containerName="heat-api" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.078205 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1373681b-8290-4963-897b-b5b27690e19a" containerName="heat-cfnapi" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.078303 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5e9657-f657-4f0e-9d46-31c6942e70d2" containerName="nova-cell0-conductor-db-sync" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.079666 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.084480 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5bk5t" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.085365 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.118861 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.139299 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.139368 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.139466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr24j\" (UniqueName: \"kubernetes.io/projected/78b595e2-b61a-4921-8d69-28adfa53f6bb-kube-api-access-mr24j\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.242531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.242912 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.242979 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr24j\" (UniqueName: 
\"kubernetes.io/projected/78b595e2-b61a-4921-8d69-28adfa53f6bb-kube-api-access-mr24j\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.249811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.251040 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.274406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr24j\" (UniqueName: \"kubernetes.io/projected/78b595e2-b61a-4921-8d69-28adfa53f6bb-kube-api-access-mr24j\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.434773 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: W0128 18:39:52.958845 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78b595e2_b61a_4921_8d69_28adfa53f6bb.slice/crio-56d7b474e9e05b7e9f81054b94df470cc8b190149f659e85ea39f20d4f2ba2e9 WatchSource:0}: Error finding container 56d7b474e9e05b7e9f81054b94df470cc8b190149f659e85ea39f20d4f2ba2e9: Status 404 returned error can't find the container with id 56d7b474e9e05b7e9f81054b94df470cc8b190149f659e85ea39f20d4f2ba2e9 Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.961045 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:39:53 crc kubenswrapper[4985]: I0128 18:39:53.010666 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"78b595e2-b61a-4921-8d69-28adfa53f6bb","Type":"ContainerStarted","Data":"56d7b474e9e05b7e9f81054b94df470cc8b190149f659e85ea39f20d4f2ba2e9"} Jan 28 18:39:54 crc kubenswrapper[4985]: I0128 18:39:54.025161 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"78b595e2-b61a-4921-8d69-28adfa53f6bb","Type":"ContainerStarted","Data":"ba93ebf5042eedb0f2f0a021ef445a90bb3767dfa7ad40c16120aa4c3cbcf755"} Jan 28 18:39:54 crc kubenswrapper[4985]: I0128 18:39:54.025790 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:54 crc kubenswrapper[4985]: I0128 18:39:54.046701 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.046682191 podStartE2EDuration="2.046682191s" podCreationTimestamp="2026-01-28 18:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:54.042115302 +0000 UTC m=+1604.868678133" watchObservedRunningTime="2026-01-28 18:39:54.046682191 +0000 UTC m=+1604.873245012" 
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037027 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037404 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-central-agent" containerID="cri-o://9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292" gracePeriod=30
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037446 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-notification-agent" containerID="cri-o://c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6" gracePeriod=30
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037459 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="sg-core" containerID="cri-o://a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056" gracePeriod=30
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037512 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="proxy-httpd" containerID="cri-o://d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b" gracePeriod=30
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.264491 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"
Jan 28 18:39:55 crc kubenswrapper[4985]: E0128 18:39:55.265216 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046379 4985 generic.go:334] "Generic (PLEG): container finished" podID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerID="d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b" exitCode=0
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046710 4985 generic.go:334] "Generic (PLEG): container finished" podID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerID="a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056" exitCode=2
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046722 4985 generic.go:334] "Generic (PLEG): container finished" podID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerID="c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6" exitCode=0
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046731 4985 generic.go:334] "Generic (PLEG): container finished" podID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerID="9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292" exitCode=0
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046442 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b"}
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046766 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056"}
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046781 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6"}
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046793 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292"}
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.806441 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851190 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851424 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851505 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851556 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851628 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851672 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.853076 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.860312 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.886375 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47" (OuterVolumeSpecName: "kube-api-access-xxl47") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "kube-api-access-xxl47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.954474 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.954519 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.954534 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.969843 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts" (OuterVolumeSpecName: "scripts") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.978428 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.057266 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.057314 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.057326 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.091110 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc"} Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.091183 4985 scope.go:117] "RemoveContainer" containerID="d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.091852 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.119605 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data" (OuterVolumeSpecName: "config-data") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.127756 4985 scope.go:117] "RemoveContainer" containerID="a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.149324 4985 scope.go:117] "RemoveContainer" containerID="c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.159442 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.173610 4985 scope.go:117] "RemoveContainer" containerID="9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.429928 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.442843 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.453504 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:57 crc kubenswrapper[4985]: E0128 18:39:57.453983 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="sg-core" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454008 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="sg-core" Jan 28 18:39:57 crc kubenswrapper[4985]: E0128 18:39:57.454031 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-notification-agent" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454043 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-notification-agent" Jan 28 18:39:57 crc kubenswrapper[4985]: E0128 18:39:57.454070 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-central-agent" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454077 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-central-agent" Jan 28 18:39:57 crc kubenswrapper[4985]: E0128 18:39:57.454108 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="proxy-httpd" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454117 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="proxy-httpd" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454340 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="proxy-httpd" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454373 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-notification-agent" Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454394 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="sg-core" Jan 28 18:39:57 crc kubenswrapper[4985]: 
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454401 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-central-agent"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.456405 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.466816 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.466939 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.474306 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570436 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570514 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570541 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570559 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570606 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673532 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673573 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673604 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673696 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673786 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673906 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673951 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.674354 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.679345 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.679438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.680503 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.681973 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.692425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.777706 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:39:58 crc kubenswrapper[4985]: I0128 18:39:58.298586 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:58 crc kubenswrapper[4985]: I0128 18:39:58.308121 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 18:39:59 crc kubenswrapper[4985]: I0128 18:39:59.133118 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"cda0d3d7eb455e4b9ead99374175951ce213d2d28aa9402eeb2c7090c5991dcb"}
Jan 28 18:39:59 crc kubenswrapper[4985]: I0128 18:39:59.285921 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" path="/var/lib/kubelet/pods/d2c9e260-5f3f-4c90-a567-384b852ce092/volumes"
Jan 28 18:40:00 crc kubenswrapper[4985]: I0128 18:40:00.144039 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa"}
Jan 28 18:40:00 crc kubenswrapper[4985]: I0128 18:40:00.144476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a"}
Jan 28 18:40:02 crc kubenswrapper[4985]: I0128 18:40:02.190453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b"}
Jan 28 18:40:02 crc kubenswrapper[4985]: I0128 18:40:02.483152 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.033580 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.037414 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.044623 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.044759 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.048965 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.133069 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.133355 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.133444 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.133548 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.236674 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.236729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.236772 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.236904 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.293366 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.293406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.297814 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.300406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.357062 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.358696 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.362401 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.363245 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.404052 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.441585 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.443226 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.455746 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.458503 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.458817 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.458991 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.501637 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.562484 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565129 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565188 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565210 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565303 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565336 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565358 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.583375 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.587076 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.588456 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.611523 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.747104 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-jdztq"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.749130 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.749226 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.750186 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751682 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751747 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.768305 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.770492 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.802082 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.831531 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.863182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.864533 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.879471 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.879608 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.879648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.879751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.880362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.888784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.890041 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.911150 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-jdztq"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.953824 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
\"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0" Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.953904 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.962172 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"] Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.965001 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-682b-account-create-update-fphsf" Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.969753 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.984726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf" Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.984884 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq" Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.984922 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq" Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.984987 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf" Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.986011 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq" Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.989767 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:03.996980 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.003419 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.039018 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"] Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.039112 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.046115 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.050299 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.077558 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"] Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.084287 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.088620 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.089571 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"] Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.117309 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.118802 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128522 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128876 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128989 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.129071 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.129893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.175570 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231114 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231188 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231429 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231545 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231602 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231633 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231655 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.238928 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.249134 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.289506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.291906 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.305079 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-682b-account-create-update-fphsf" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337549 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337870 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337930 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337955 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") 
" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.338017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.339036 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.339856 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.340309 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.340409 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.340671 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.340903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.363451 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.432145 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.559615 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.154225 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.157449 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.164084 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.166933 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.176892 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"]
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.276516 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.276652 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.283125 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.283260 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.351308 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.377584 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.385697 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.385856 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.386104 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.386175 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.396939 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.397380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.401348 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.405478 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.410910 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.477159 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b5f547e-c916-40cd-8f40-5fc2b482a4f4","Type":"ContainerStarted","Data":"d67f49419ddc18736265dbf8231bcf89cd6ee9def418fabf88a409ff0a470ae3"}
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.491201 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389"}
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.491456 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.495319 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerStarted","Data":"6ab1f97ac874b54ef01c0179a3153dd1ba3d40d00482df2197af30281a5558ed"}
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.499863 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"adbc3193-99ed-4a75-848b-6b98dfef1d3a","Type":"ContainerStarted","Data":"d8cf9fb9c6cec17cb1a2721de6a0e35c45b968fbf964f4ce2fc3f3f714ea3e1d"}
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"adbc3193-99ed-4a75-848b-6b98dfef1d3a","Type":"ContainerStarted","Data":"d8cf9fb9c6cec17cb1a2721de6a0e35c45b968fbf964f4ce2fc3f3f714ea3e1d"} Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.507068 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m82mm" event={"ID":"14e43739-91f4-43c9-9b01-5f0574a3b150","Type":"ContainerStarted","Data":"c83af2ab400014fc785ba01cb5de51bf84a3ea8da54f74af11e2f8a7b4d8bbce"} Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.507108 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m82mm" event={"ID":"14e43739-91f4-43c9-9b01-5f0574a3b150","Type":"ContainerStarted","Data":"c42ea52d09811fa700e48475032c542d4742677726b27e37a6c29c19e54b460e"} Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.533941 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.715276699 podStartE2EDuration="8.533912702s" podCreationTimestamp="2026-01-28 18:39:57 +0000 UTC" firstStartedPulling="2026-01-28 18:39:58.307839484 +0000 UTC m=+1609.134402305" lastFinishedPulling="2026-01-28 18:40:04.126475487 +0000 UTC m=+1614.953038308" observedRunningTime="2026-01-28 18:40:05.524240959 +0000 UTC m=+1616.350803780" watchObservedRunningTime="2026-01-28 18:40:05.533912702 +0000 UTC m=+1616.360475523" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.559862 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-m82mm" podStartSLOduration=2.559839564 podStartE2EDuration="2.559839564s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:05.544495091 +0000 UTC m=+1616.371057912" watchObservedRunningTime="2026-01-28 18:40:05.559839564 +0000 UTC m=+1616.386402385" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.704037 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.805979 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-jdztq"]
Jan 28 18:40:05 crc kubenswrapper[4985]: W0128 18:40:05.826441 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2578b35_7408_46ed_bcee_8b0ff114cd33.slice/crio-1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0 WatchSource:0}: Error finding container 1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0: Status 404 returned error can't find the container with id 1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0
Jan 28 18:40:05 crc kubenswrapper[4985]: W0128 18:40:05.826803 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36263e10_c8a1_46f3_8fbd_b19bf25c48f5.slice/crio-313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd WatchSource:0}: Error finding container 313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd: Status 404 returned error can't find the container with id 313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.837560 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.891099 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"]
Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.943843 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"]
Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.411944 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"]
Jan 28 18:40:06 crc kubenswrapper[4985]: W0128 18:40:06.454393 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc545ce7_58a7_4757_8eab_8b0a28570a49.slice/crio-bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9 WatchSource:0}: Error finding container bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9: Status 404 returned error can't find the container with id bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9
Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.585437 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" event={"ID":"dc545ce7-58a7-4757-8eab-8b0a28570a49","Type":"ContainerStarted","Data":"bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9"}
Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.598945 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdztq" event={"ID":"c2578b35-7408-46ed-bcee-8b0ff114cd33","Type":"ContainerStarted","Data":"178c7940c1e7c85eaf00e787d93879f89e3e05e71f11cbc272b8188e9429d0c9"}
Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.599003 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdztq" event={"ID":"c2578b35-7408-46ed-bcee-8b0ff114cd33","Type":"ContainerStarted","Data":"1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0"}
Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.607364 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerStarted","Data":"313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd"}
Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.619758 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerStarted","Data":"b12e09f6a40d1423b050a43aba39f7da27aac982d0fc418cb95ef0f8e230e6e1"}
Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.628004 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-682b-account-create-update-fphsf" event={"ID":"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5","Type":"ContainerStarted","Data":"c4744d3e091d5fe137338eff5a0eae180d79a285c04bb7a04b679d4f0af6cc4d"}
Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.629558 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-jdztq" podStartSLOduration=3.629534743 podStartE2EDuration="3.629534743s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:06.615097856 +0000 UTC m=+1617.441660697" watchObservedRunningTime="2026-01-28 18:40:06.629534743 +0000 UTC m=+1617.456097564"
Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.262690 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.281660 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.647095 4985 generic.go:334] "Generic (PLEG): container finished" podID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" containerID="382f43a07ac5b420a95def886ddd1d4454cef25ffaca287fa20c580c3c9e42fc" exitCode=0
Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.647532 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-682b-account-create-update-fphsf" event={"ID":"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5","Type":"ContainerDied","Data":"382f43a07ac5b420a95def886ddd1d4454cef25ffaca287fa20c580c3c9e42fc"}
Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.653468 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" event={"ID":"dc545ce7-58a7-4757-8eab-8b0a28570a49","Type":"ContainerStarted","Data":"5fa6b37534633df411a4bdc3fa77962a9df43667fb32532c9621de45df63d178"}
Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.660368 4985 generic.go:334] "Generic (PLEG): container finished" podID="c2578b35-7408-46ed-bcee-8b0ff114cd33" containerID="178c7940c1e7c85eaf00e787d93879f89e3e05e71f11cbc272b8188e9429d0c9" exitCode=0
Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.660500 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdztq" event={"ID":"c2578b35-7408-46ed-bcee-8b0ff114cd33","Type":"ContainerDied","Data":"178c7940c1e7c85eaf00e787d93879f89e3e05e71f11cbc272b8188e9429d0c9"}
Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.673111 4985 generic.go:334] "Generic (PLEG): container finished" podID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerID="156d97e63d4214e7b4ebce332bf5ca2efd74529bc9a0eb50a6b04fcfb1f0fcab" exitCode=0
Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.673176 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerDied","Data":"156d97e63d4214e7b4ebce332bf5ca2efd74529bc9a0eb50a6b04fcfb1f0fcab"}
pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerDied","Data":"156d97e63d4214e7b4ebce332bf5ca2efd74529bc9a0eb50a6b04fcfb1f0fcab"} Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.747857 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" podStartSLOduration=2.747838366 podStartE2EDuration="2.747838366s" podCreationTimestamp="2026-01-28 18:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:07.742449594 +0000 UTC m=+1618.569012415" watchObservedRunningTime="2026-01-28 18:40:07.747838366 +0000 UTC m=+1618.574401187" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.003779 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jdztq" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.150483 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") pod \"c2578b35-7408-46ed-bcee-8b0ff114cd33\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.150627 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") pod \"c2578b35-7408-46ed-bcee-8b0ff114cd33\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.152140 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2578b35-7408-46ed-bcee-8b0ff114cd33" (UID: "c2578b35-7408-46ed-bcee-8b0ff114cd33"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.162157 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb" (OuterVolumeSpecName: "kube-api-access-ct9fb") pod "c2578b35-7408-46ed-bcee-8b0ff114cd33" (UID: "c2578b35-7408-46ed-bcee-8b0ff114cd33"). InnerVolumeSpecName "kube-api-access-ct9fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.254326 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.254372 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.264166 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:40:10 crc kubenswrapper[4985]: E0128 18:40:10.264705 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.414953 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-682b-account-create-update-fphsf" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.564915 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") pod \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.565234 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") pod \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.566048 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" (UID: "21d5020b-3b33-4e6c-95dd-9aad46d3f0e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.574376 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf" (OuterVolumeSpecName: "kube-api-access-gjndf") pod "21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" (UID: "21d5020b-3b33-4e6c-95dd-9aad46d3f0e5"). InnerVolumeSpecName "kube-api-access-gjndf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.668137 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.668175 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.714917 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-682b-account-create-update-fphsf" event={"ID":"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5","Type":"ContainerDied","Data":"c4744d3e091d5fe137338eff5a0eae180d79a285c04bb7a04b679d4f0af6cc4d"} Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.714966 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4744d3e091d5fe137338eff5a0eae180d79a285c04bb7a04b679d4f0af6cc4d" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.715006 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-682b-account-create-update-fphsf" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.721431 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdztq" event={"ID":"c2578b35-7408-46ed-bcee-8b0ff114cd33","Type":"ContainerDied","Data":"1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0"} Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.721478 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.721482 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-jdztq" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.725493 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerStarted","Data":"4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9"} Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.725790 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.728023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerStarted","Data":"8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41"} Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.768754 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" podStartSLOduration=7.768728812 podStartE2EDuration="7.768728812s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:10.757105534 +0000 UTC m=+1621.583668355" watchObservedRunningTime="2026-01-28 18:40:10.768728812 +0000 UTC m=+1621.595291633" Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.753070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerStarted","Data":"8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.753609 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerStarted","Data":"0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.753328 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-metadata" containerID="cri-o://8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf" gracePeriod=30 Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.753120 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-log" containerID="cri-o://0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2" gracePeriod=30 Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.757664 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerStarted","Data":"7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.760213 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"adbc3193-99ed-4a75-848b-6b98dfef1d3a","Type":"ContainerStarted","Data":"8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.760295 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" 
podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e" gracePeriod=30 Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.766891 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b5f547e-c916-40cd-8f40-5fc2b482a4f4","Type":"ContainerStarted","Data":"4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.805156 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.121407102 podStartE2EDuration="8.805133563s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:05.83807843 +0000 UTC m=+1616.664641251" lastFinishedPulling="2026-01-28 18:40:10.521804901 +0000 UTC m=+1621.348367712" observedRunningTime="2026-01-28 18:40:11.777545954 +0000 UTC m=+1622.604108805" watchObservedRunningTime="2026-01-28 18:40:11.805133563 +0000 UTC m=+1622.631696424" Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.821900 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.7504197870000002 podStartE2EDuration="8.821877145s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:05.302126208 +0000 UTC m=+1616.128689029" lastFinishedPulling="2026-01-28 18:40:10.373583526 +0000 UTC m=+1621.200146387" observedRunningTime="2026-01-28 18:40:11.803599529 +0000 UTC m=+1622.630162380" watchObservedRunningTime="2026-01-28 18:40:11.821877145 +0000 UTC m=+1622.648439976" Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.860371 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.798808904 podStartE2EDuration="8.860350202s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:05.310803323 +0000 UTC m=+1616.137366144" lastFinishedPulling="2026-01-28 18:40:10.372344621 +0000 UTC m=+1621.198907442" observedRunningTime="2026-01-28 18:40:11.852201861 +0000 UTC m=+1622.678764692" watchObservedRunningTime="2026-01-28 18:40:11.860350202 +0000 UTC m=+1622.686913033" Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.877385 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.80435676 podStartE2EDuration="8.877367442s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:05.300027289 +0000 UTC m=+1616.126590110" lastFinishedPulling="2026-01-28 18:40:10.373037971 +0000 UTC m=+1621.199600792" observedRunningTime="2026-01-28 18:40:11.872088543 +0000 UTC m=+1622.698651364" watchObservedRunningTime="2026-01-28 18:40:11.877367442 +0000 UTC m=+1622.703930283" Jan 28 18:40:12 crc kubenswrapper[4985]: I0128 18:40:12.784457 4985 generic.go:334] "Generic (PLEG): container finished" podID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerID="8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf" exitCode=0 Jan 28 18:40:12 crc kubenswrapper[4985]: I0128 18:40:12.784671 4985 generic.go:334] "Generic (PLEG): container finished" podID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerID="0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2" exitCode=143 Jan 28 18:40:12 crc kubenswrapper[4985]: I0128 18:40:12.784904 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerDied","Data":"8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf"} Jan 28 18:40:12 crc kubenswrapper[4985]: I0128 18:40:12.784972 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerDied","Data":"0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2"} Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.069521 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.154359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") pod \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.154573 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") pod \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.154708 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") pod \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.154864 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") pod \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.155136 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs" (OuterVolumeSpecName: "logs") pod "36263e10-c8a1-46f3-8fbd-b19bf25c48f5" (UID: "36263e10-c8a1-46f3-8fbd-b19bf25c48f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.155845 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.160305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf" (OuterVolumeSpecName: "kube-api-access-j9dlf") pod "36263e10-c8a1-46f3-8fbd-b19bf25c48f5" (UID: "36263e10-c8a1-46f3-8fbd-b19bf25c48f5"). InnerVolumeSpecName "kube-api-access-j9dlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.192225 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data" (OuterVolumeSpecName: "config-data") pod "36263e10-c8a1-46f3-8fbd-b19bf25c48f5" (UID: "36263e10-c8a1-46f3-8fbd-b19bf25c48f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.209416 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36263e10-c8a1-46f3-8fbd-b19bf25c48f5" (UID: "36263e10-c8a1-46f3-8fbd-b19bf25c48f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.258317 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.258371 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.258386 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.800320 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerDied","Data":"313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd"} Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.800633 4985 scope.go:117] "RemoveContainer" containerID="8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.801992 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.803014 4985 generic.go:334] "Generic (PLEG): container finished" podID="14e43739-91f4-43c9-9b01-5f0574a3b150" containerID="c83af2ab400014fc785ba01cb5de51bf84a3ea8da54f74af11e2f8a7b4d8bbce" exitCode=0 Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.803043 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m82mm" event={"ID":"14e43739-91f4-43c9-9b01-5f0574a3b150","Type":"ContainerDied","Data":"c83af2ab400014fc785ba01cb5de51bf84a3ea8da54f74af11e2f8a7b4d8bbce"} Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.842539 4985 scope.go:117] "RemoveContainer" containerID="0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.863908 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.880544 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896065 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:13 crc kubenswrapper[4985]: E0128 18:40:13.896663 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2578b35-7408-46ed-bcee-8b0ff114cd33" containerName="mariadb-database-create" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896685 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2578b35-7408-46ed-bcee-8b0ff114cd33" containerName="mariadb-database-create" Jan 28 18:40:13 crc kubenswrapper[4985]: E0128 18:40:13.896693 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-log" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896701 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-log" Jan 28 18:40:13 crc kubenswrapper[4985]: E0128 18:40:13.896710 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-metadata" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896716 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-metadata" Jan 28 18:40:13 crc kubenswrapper[4985]: E0128 18:40:13.896736 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" containerName="mariadb-account-create-update" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896742 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" containerName="mariadb-account-create-update" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896943 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" containerName="mariadb-account-create-update" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896962 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2578b35-7408-46ed-bcee-8b0ff114cd33" containerName="mariadb-database-create" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896983 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-log" Jan 28 18:40:13 
crc kubenswrapper[4985]: I0128 18:40:13.896992 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-metadata" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.898319 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.901138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.902914 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.933528 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.956365 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-hgpsv"] Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.957905 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.961149 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.965537 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.966075 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.966198 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bbsjj" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976157 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976279 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976317 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976387 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976418 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976441 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976565 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjq87\" (UniqueName: \"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976669 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.998874 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-hgpsv"] Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.047577 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.047623 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079702 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079759 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079921 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjq87\" (UniqueName: \"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080002 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080210 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.086901 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.087099 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.087902 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.087947 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.088750 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.090553 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.090594 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.090710 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.102231 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.104834 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjq87\" (UniqueName: \"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.119060 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.138960 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.219778 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.278768 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.840410 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:14 crc kubenswrapper[4985]: W0128 18:40:14.863727 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3a6db7_1b8e_47a8_8c09_9f13fa2823a2.slice/crio-a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79 WatchSource:0}: Error finding container a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79: Status 404 returned error can't find the container with id a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79 Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.907056 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.057069 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-hgpsv"] Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.132015 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.132043 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.283062 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" path="/var/lib/kubelet/pods/36263e10-c8a1-46f3-8fbd-b19bf25c48f5/volumes" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.508394 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m82mm" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.637337 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") pod \"14e43739-91f4-43c9-9b01-5f0574a3b150\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.637717 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") pod \"14e43739-91f4-43c9-9b01-5f0574a3b150\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.637787 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") pod \"14e43739-91f4-43c9-9b01-5f0574a3b150\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.638194 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") pod \"14e43739-91f4-43c9-9b01-5f0574a3b150\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.644053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts" (OuterVolumeSpecName: "scripts") pod "14e43739-91f4-43c9-9b01-5f0574a3b150" (UID: "14e43739-91f4-43c9-9b01-5f0574a3b150"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.644560 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p" (OuterVolumeSpecName: "kube-api-access-gh45p") pod "14e43739-91f4-43c9-9b01-5f0574a3b150" (UID: "14e43739-91f4-43c9-9b01-5f0574a3b150"). InnerVolumeSpecName "kube-api-access-gh45p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.652758 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.652794 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.674602 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14e43739-91f4-43c9-9b01-5f0574a3b150" (UID: "14e43739-91f4-43c9-9b01-5f0574a3b150"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.682477 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data" (OuterVolumeSpecName: "config-data") pod "14e43739-91f4-43c9-9b01-5f0574a3b150" (UID: "14e43739-91f4-43c9-9b01-5f0574a3b150"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.755578 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.755613 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.860271 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerStarted","Data":"d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b"} Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.860315 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerStarted","Data":"ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c"} Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.860326 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerStarted","Data":"a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79"} Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.862400 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hgpsv" event={"ID":"7decce21-e84c-4501-bf0d-ca01387c51ee","Type":"ContainerStarted","Data":"72a3d23c9a572bc420fc7e3eb89dda8941d63c42b0d6a69ff809fa9dea983c2f"} Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.872545 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m82mm" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.872573 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m82mm" event={"ID":"14e43739-91f4-43c9-9b01-5f0574a3b150","Type":"ContainerDied","Data":"c42ea52d09811fa700e48475032c542d4742677726b27e37a6c29c19e54b460e"} Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.872654 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42ea52d09811fa700e48475032c542d4742677726b27e37a6c29c19e54b460e" Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.900505 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.900483994 podStartE2EDuration="2.900483994s" podCreationTimestamp="2026-01-28 18:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:15.883862755 +0000 UTC m=+1626.710425576" watchObservedRunningTime="2026-01-28 18:40:15.900483994 +0000 UTC m=+1626.727046815" Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.010541 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.010809 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log" containerID="cri-o://8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41" gracePeriod=30 Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.011518 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api" containerID="cri-o://7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b" gracePeriod=30 Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.026304 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.077832 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.885614 4985 generic.go:334] "Generic (PLEG): container finished" podID="9094cf8a-0196-4d57-9b52-c433eece1088" containerID="8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41" exitCode=143 Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.885674 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerDied","Data":"8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41"} Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.885772 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler" containerID="cri-o://4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" gracePeriod=30 Jan 28 18:40:17 crc kubenswrapper[4985]: I0128 18:40:17.897601 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-log" containerID="cri-o://ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c" 
gracePeriod=30 Jan 28 18:40:17 crc kubenswrapper[4985]: I0128 18:40:17.897689 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-metadata" containerID="cri-o://d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b" gracePeriod=30 Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.924506 4985 generic.go:334] "Generic (PLEG): container finished" podID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerID="d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b" exitCode=0 Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.924548 4985 generic.go:334] "Generic (PLEG): container finished" podID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerID="ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c" exitCode=143 Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.924539 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerDied","Data":"d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b"} Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.924600 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerDied","Data":"ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c"} Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.927754 4985 generic.go:334] "Generic (PLEG): container finished" podID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" exitCode=0 Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.927801 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b5f547e-c916-40cd-8f40-5fc2b482a4f4","Type":"ContainerDied","Data":"4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff"} Jan 28 18:40:19 crc kubenswrapper[4985]: E0128 18:40:19.090753 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff is running failed: container process not found" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 18:40:19 crc kubenswrapper[4985]: E0128 18:40:19.091135 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff is running failed: container process not found" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 18:40:19 crc kubenswrapper[4985]: E0128 18:40:19.091555 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff is running failed: container process not found" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 18:40:19 crc kubenswrapper[4985]: E0128 18:40:19.091619 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler" Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.220659 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.220704 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.434460 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.508619 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"] Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.508846 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="dnsmasq-dns" containerID="cri-o://1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71" gracePeriod=10 Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.944191 4985 generic.go:334] "Generic (PLEG): container finished" podID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerID="1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71" exitCode=0 Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.944333 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerDied","Data":"1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71"} Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.947700 4985 generic.go:334] "Generic (PLEG): container finished" podID="dc545ce7-58a7-4757-8eab-8b0a28570a49" containerID="5fa6b37534633df411a4bdc3fa77962a9df43667fb32532c9621de45df63d178" exitCode=0 Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.947743 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" event={"ID":"dc545ce7-58a7-4757-8eab-8b0a28570a49","Type":"ContainerDied","Data":"5fa6b37534633df411a4bdc3fa77962a9df43667fb32532c9621de45df63d178"} Jan 28 18:40:20 crc kubenswrapper[4985]: I0128 18:40:20.971958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerDied","Data":"124e40d06c3bc6dec66768ab9299f6ec41b3437c9591832dd7f81dc8a3da2106"} Jan 28 18:40:20 crc kubenswrapper[4985]: I0128 18:40:20.972283 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124e40d06c3bc6dec66768ab9299f6ec41b3437c9591832dd7f81dc8a3da2106" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.055592 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.074130 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.099831 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.100816 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.100950 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.100989 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101050 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjq87\" (UniqueName: \"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101075 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101138 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101231 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101288 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
\"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101518 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.106949 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs" (OuterVolumeSpecName: "logs") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.120526 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv" (OuterVolumeSpecName: "kube-api-access-xfhgv") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "kube-api-access-xfhgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.122991 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87" (OuterVolumeSpecName: "kube-api-access-mjq87") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "kube-api-access-mjq87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205072 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") pod \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205126 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") pod \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205142 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") pod \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205860 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205876 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205886 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjq87\" (UniqueName: 
\"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.210872 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58" (OuterVolumeSpecName: "kube-api-access-wcg58") pod "0b5f547e-c916-40cd-8f40-5fc2b482a4f4" (UID: "0b5f547e-c916-40cd-8f40-5fc2b482a4f4"). InnerVolumeSpecName "kube-api-access-wcg58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.237578 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.259996 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data" (OuterVolumeSpecName: "config-data") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.260440 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config" (OuterVolumeSpecName: "config") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.290400 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.302433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322907 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322948 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322965 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322979 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322989 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322998 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.324569 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.334092 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data" (OuterVolumeSpecName: "config-data") pod "0b5f547e-c916-40cd-8f40-5fc2b482a4f4" (UID: "0b5f547e-c916-40cd-8f40-5fc2b482a4f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.338774 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b5f547e-c916-40cd-8f40-5fc2b482a4f4" (UID: "0b5f547e-c916-40cd-8f40-5fc2b482a4f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.339776 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.344237 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.425795 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.426030 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.426096 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.426150 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.426202 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.686441 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.834936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") pod \"dc545ce7-58a7-4757-8eab-8b0a28570a49\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.835166 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") pod \"dc545ce7-58a7-4757-8eab-8b0a28570a49\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.835359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") pod \"dc545ce7-58a7-4757-8eab-8b0a28570a49\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.835479 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") pod \"dc545ce7-58a7-4757-8eab-8b0a28570a49\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.839192 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q" (OuterVolumeSpecName: "kube-api-access-z8d9q") pod "dc545ce7-58a7-4757-8eab-8b0a28570a49" (UID: "dc545ce7-58a7-4757-8eab-8b0a28570a49"). InnerVolumeSpecName "kube-api-access-z8d9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.839608 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts" (OuterVolumeSpecName: "scripts") pod "dc545ce7-58a7-4757-8eab-8b0a28570a49" (UID: "dc545ce7-58a7-4757-8eab-8b0a28570a49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.866394 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data" (OuterVolumeSpecName: "config-data") pod "dc545ce7-58a7-4757-8eab-8b0a28570a49" (UID: "dc545ce7-58a7-4757-8eab-8b0a28570a49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.887600 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc545ce7-58a7-4757-8eab-8b0a28570a49" (UID: "dc545ce7-58a7-4757-8eab-8b0a28570a49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.938422 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.938452 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.938463 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.938471 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.990881 4985 generic.go:334] "Generic (PLEG): container finished" podID="9094cf8a-0196-4d57-9b52-c433eece1088" containerID="7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b" exitCode=0 Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.990954 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerDied","Data":"7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b"} Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.994654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" event={"ID":"dc545ce7-58a7-4757-8eab-8b0a28570a49","Type":"ContainerDied","Data":"bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9"} Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.994692 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.994762 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.997818 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b5f547e-c916-40cd-8f40-5fc2b482a4f4","Type":"ContainerDied","Data":"d67f49419ddc18736265dbf8231bcf89cd6ee9def418fabf88a409ff0a470ae3"} Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.997879 4985 scope.go:117] "RemoveContainer" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.998070 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.043422 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerDied","Data":"a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79"} Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.043565 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.049747 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.062544 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.067854 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.068611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hgpsv" event={"ID":"7decce21-e84c-4501-bf0d-ca01387c51ee","Type":"ContainerStarted","Data":"6c205ff1c9724512d656b6452f88a456eabb29c117c2d744ca2a5dce502105d6"}
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.086407 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087072 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="init"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087090 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="init"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087120 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-log"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087128 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-log"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087160 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-metadata"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087169 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-metadata"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087191 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="dnsmasq-dns"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087198 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="dnsmasq-dns"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087207 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc545ce7-58a7-4757-8eab-8b0a28570a49" containerName="nova-cell1-conductor-db-sync"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087214 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc545ce7-58a7-4757-8eab-8b0a28570a49" containerName="nova-cell1-conductor-db-sync"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087244 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e43739-91f4-43c9-9b01-5f0574a3b150" containerName="nova-manage"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087272 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e43739-91f4-43c9-9b01-5f0574a3b150" containerName="nova-manage"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087289 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler"
podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087296 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087630 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-log" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087649 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="dnsmasq-dns" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087670 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e43739-91f4-43c9-9b01-5f0574a3b150" containerName="nova-manage" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087684 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-metadata" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087704 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc545ce7-58a7-4757-8eab-8b0a28570a49" containerName="nova-cell1-conductor-db-sync" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087713 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.088794 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.088818 4985 scope.go:117] "RemoveContainer" containerID="d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.092244 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.105081 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.115618 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.118936 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.142216 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.149090 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.149483 4985 scope.go:117] "RemoveContainer" containerID="ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.161261 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.178129 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.201124 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.202451 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-hgpsv" podStartSLOduration=3.520269022 podStartE2EDuration="9.202436173s" podCreationTimestamp="2026-01-28 18:40:13 +0000 UTC" firstStartedPulling="2026-01-28 18:40:15.059653975 +0000 UTC m=+1625.886216796" lastFinishedPulling="2026-01-28 18:40:20.741821126 +0000 UTC m=+1631.568383947" observedRunningTime="2026-01-28 18:40:22.11486642 +0000 UTC m=+1632.941429241" watchObservedRunningTime="2026-01-28 18:40:22.202436173 +0000 UTC m=+1633.028998994"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.204749 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.209516 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.209765 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.237782 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.250737 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.250795 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.250930 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.250988 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.251020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6rg\" (UniqueName: \"kubernetes.io/projected/bbb020dd-95f1-4d78-9899-9fd0eca60584-kube-api-access-fm6rg\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.251053 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.259346 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.270420 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.335625 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.363781 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.363873 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6rg\" (UniqueName: \"kubernetes.io/projected/bbb020dd-95f1-4d78-9899-9fd0eca60584-kube-api-access-fm6rg\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.363944 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364130 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364182 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364227 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364413 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364445 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364576 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364635 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364704 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.370350 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.370422 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.404918 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.407036 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6rg\" (UniqueName: \"kubernetes.io/projected/bbb020dd-95f1-4d78-9899-9fd0eca60584-kube-api-access-fm6rg\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.420821 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.427441 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.428644 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.458706 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466018 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") pod \"9094cf8a-0196-4d57-9b52-c433eece1088\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") "
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466180 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") pod \"9094cf8a-0196-4d57-9b52-c433eece1088\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") "
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") pod \"9094cf8a-0196-4d57-9b52-c433eece1088\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") "
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466468 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") pod \"9094cf8a-0196-4d57-9b52-c433eece1088\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") "
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466889 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.467022 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.467049 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.467095 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.467122 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.468186 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs" (OuterVolumeSpecName: "logs") pod "9094cf8a-0196-4d57-9b52-c433eece1088" (UID: "9094cf8a-0196-4d57-9b52-c433eece1088"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
"9094cf8a-0196-4d57-9b52-c433eece1088" (UID: "9094cf8a-0196-4d57-9b52-c433eece1088"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.469225 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.474393 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.476345 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.479869 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb" (OuterVolumeSpecName: "kube-api-access-jvpbb") pod "9094cf8a-0196-4d57-9b52-c433eece1088" (UID: "9094cf8a-0196-4d57-9b52-c433eece1088"). InnerVolumeSpecName "kube-api-access-jvpbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.486595 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.489723 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.523241 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data" (OuterVolumeSpecName: "config-data") pod "9094cf8a-0196-4d57-9b52-c433eece1088" (UID: "9094cf8a-0196-4d57-9b52-c433eece1088"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.523647 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9094cf8a-0196-4d57-9b52-c433eece1088" (UID: "9094cf8a-0196-4d57-9b52-c433eece1088"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.541120 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.575543 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.575607 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.575625 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.575635 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.000028 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.090457 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerDied","Data":"6ab1f97ac874b54ef01c0179a3153dd1ba3d40d00482df2197af30281a5558ed"}
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.090821 4985 scope.go:117] "RemoveContainer" containerID="7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b"
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.090486 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:40:23 crc kubenswrapper[4985]: W0128 18:40:23.097900 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod938ef95c_9a4f_4f1e_b92c_8c16f0043102.slice/crio-8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156 WatchSource:0}: Error finding container 8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156: Status 404 returned error can't find the container with id 8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.103998 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.106488 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bbb020dd-95f1-4d78-9899-9fd0eca60584","Type":"ContainerStarted","Data":"9cbc86b78469c4374a4f308e99f249b09f17f57a89721e7f8fdda83780cf8762"}
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.190983 4985 scope.go:117] "RemoveContainer" containerID="8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41"
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.246627 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.260496 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.292594 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" path="/var/lib/kubelet/pods/0b5f547e-c916-40cd-8f40-5fc2b482a4f4/volumes"
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.293469 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" path="/var/lib/kubelet/pods/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b/volumes"
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.294410 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" path="/var/lib/kubelet/pods/9094cf8a-0196-4d57-9b52-c433eece1088/volumes"
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.299460 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" path="/var/lib/kubelet/pods/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2/volumes"
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.300452 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:23 crc kubenswrapper[4985]: E0128 18:40:23.301641 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log"
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.301731 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log"
Jan 28 18:40:23 crc kubenswrapper[4985]: E0128 18:40:23.301812 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api"
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.301875 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api"
Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.302279 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log"
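The W-level manager.go:1169 entry above names the same crio container ID (8d462a40...) that a PLEG ContainerStarted event reports seconds later, which suggests cAdvisor's cgroup watcher simply raced container creation rather than hitting a real failure. A sketch, under the same kubelet.log assumption, that cross-checks these 404 watch warnings against later ContainerStarted events so only truly orphaned IDs remain:

import re

WATCH_404 = re.compile(r"can't find the container with id ([0-9a-f]{64})")
STARTED   = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

def orphaned_watch_warnings(path="kubelet.log"):
    warned, started = set(), set()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            warned.update(WATCH_404.findall(line))
            started.update(STARTED.findall(line))
    # IDs that 404ed and never started are the ones worth investigating.
    return warned - started

if __name__ == "__main__":
    for cid in sorted(orphaned_watch_warnings()):
        print("never started:", cid)

Both 404ed IDs in this section (8d462a40... and afeb7e34... below) do show up in ContainerStarted events, so the sketch would print nothing here.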
"RemoveStaleState removing state" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.302419 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.303975 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.304148 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.312358 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.313622 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.411990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.412061 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.412105 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.412132 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.514927 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.515479 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.515640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.516519 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.516563 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.520374 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.520591 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.540411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.632341 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.129361 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerStarted","Data":"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.129668 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerStarted","Data":"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.129678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerStarted","Data":"beb681875d1b031fab542c0f8d59f502b25e7da8eb5f0f02c317251a2c3309d0"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.144820 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.145203 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"938ef95c-9a4f-4f1e-b92c-8c16f0043102","Type":"ContainerStarted","Data":"047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.145273 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"938ef95c-9a4f-4f1e-b92c-8c16f0043102","Type":"ContainerStarted","Data":"8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156"} Jan 28 18:40:24 crc kubenswrapper[4985]: W0128 18:40:24.159842 4985 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72cdf54b_14dd_4844_bb8c_b68794fba1b9.slice/crio-afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47 WatchSource:0}: Error finding container afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47: Status 404 returned error can't find the container with id afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47 Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.159896 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bbb020dd-95f1-4d78-9899-9fd0eca60584","Type":"ContainerStarted","Data":"dc8c534822edfe9eb8afcfdb5fd500622fdb8c6873115d966342dd4d21ddfd06"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.160618 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.161917 4985 generic.go:334] "Generic (PLEG): container finished" podID="7decce21-e84c-4501-bf0d-ca01387c51ee" containerID="6c205ff1c9724512d656b6452f88a456eabb29c117c2d744ca2a5dce502105d6" exitCode=0 Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.161964 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hgpsv" event={"ID":"7decce21-e84c-4501-bf0d-ca01387c51ee","Type":"ContainerDied","Data":"6c205ff1c9724512d656b6452f88a456eabb29c117c2d744ca2a5dce502105d6"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.170740 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.170716631 podStartE2EDuration="2.170716631s" podCreationTimestamp="2026-01-28 18:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:24.152349082 +0000 UTC m=+1634.978911903" watchObservedRunningTime="2026-01-28 18:40:24.170716631 +0000 UTC m=+1634.997279452" Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.189882 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.189859762 podStartE2EDuration="2.189859762s" podCreationTimestamp="2026-01-28 18:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:24.17279601 +0000 UTC m=+1634.999358831" watchObservedRunningTime="2026-01-28 18:40:24.189859762 +0000 UTC m=+1635.016422583" Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.210027 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.2100054 podStartE2EDuration="2.2100054s" podCreationTimestamp="2026-01-28 18:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:24.202508429 +0000 UTC m=+1635.029071250" watchObservedRunningTime="2026-01-28 18:40:24.2100054 +0000 UTC m=+1635.036568221" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.179701 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerStarted","Data":"5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d"} Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.182096 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerStarted","Data":"6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54"} Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.182128 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerStarted","Data":"afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47"} Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.216119 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.216051804 podStartE2EDuration="2.216051804s" podCreationTimestamp="2026-01-28 18:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:25.208676625 +0000 UTC m=+1636.035239446" watchObservedRunningTime="2026-01-28 18:40:25.216051804 +0000 UTC m=+1636.042614625" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.263802 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:40:25 crc kubenswrapper[4985]: E0128 18:40:25.264104 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.666703 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.783209 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") pod \"7decce21-e84c-4501-bf0d-ca01387c51ee\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.783595 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") pod \"7decce21-e84c-4501-bf0d-ca01387c51ee\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.783742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") pod \"7decce21-e84c-4501-bf0d-ca01387c51ee\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.783899 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") pod \"7decce21-e84c-4501-bf0d-ca01387c51ee\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.789725 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz" (OuterVolumeSpecName: "kube-api-access-wsqcz") pod "7decce21-e84c-4501-bf0d-ca01387c51ee" (UID: "7decce21-e84c-4501-bf0d-ca01387c51ee"). InnerVolumeSpecName "kube-api-access-wsqcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.795933 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts" (OuterVolumeSpecName: "scripts") pod "7decce21-e84c-4501-bf0d-ca01387c51ee" (UID: "7decce21-e84c-4501-bf0d-ca01387c51ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.820758 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7decce21-e84c-4501-bf0d-ca01387c51ee" (UID: "7decce21-e84c-4501-bf0d-ca01387c51ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.833007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data" (OuterVolumeSpecName: "config-data") pod "7decce21-e84c-4501-bf0d-ca01387c51ee" (UID: "7decce21-e84c-4501-bf0d-ca01387c51ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.886773 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.886816 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.886833 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.886845 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:26 crc kubenswrapper[4985]: I0128 18:40:26.194648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hgpsv" event={"ID":"7decce21-e84c-4501-bf0d-ca01387c51ee","Type":"ContainerDied","Data":"72a3d23c9a572bc420fc7e3eb89dda8941d63c42b0d6a69ff809fa9dea983c2f"} Jan 28 18:40:26 crc kubenswrapper[4985]: I0128 18:40:26.194705 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72a3d23c9a572bc420fc7e3eb89dda8941d63c42b0d6a69ff809fa9dea983c2f" Jan 28 18:40:26 crc kubenswrapper[4985]: I0128 18:40:26.194720 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:27 crc kubenswrapper[4985]: I0128 18:40:27.459444 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 18:40:27 crc kubenswrapper[4985]: I0128 18:40:27.542449 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:40:27 crc kubenswrapper[4985]: I0128 18:40:27.542507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:40:28 crc kubenswrapper[4985]: I0128 18:40:28.076191 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.190604 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 28 18:40:29 crc kubenswrapper[4985]: E0128 18:40:29.192197 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7decce21-e84c-4501-bf0d-ca01387c51ee" containerName="aodh-db-sync" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.192227 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7decce21-e84c-4501-bf0d-ca01387c51ee" containerName="aodh-db-sync" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.192842 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7decce21-e84c-4501-bf0d-ca01387c51ee" containerName="aodh-db-sync" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.216954 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.223030 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.223099 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bbsjj" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.223528 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.260939 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.265358 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.265466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.265582 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.265805 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.368058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.373743 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.374761 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.374818 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.375164 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.382982 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.398569 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.406953 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.542399 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:40:30 crc kubenswrapper[4985]: I0128 18:40:30.295315 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:40:31 crc kubenswrapper[4985]: I0128 18:40:31.283615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea"} Jan 28 18:40:31 crc kubenswrapper[4985]: I0128 18:40:31.284199 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"0e67457eae33c25cf3a4581aecdd202fe5ea7cb4f78ba1758d22e2ed33abfd6b"} Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.459227 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.467323 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.511061 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.542673 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.542725 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.933789 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.934212 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-central-agent" containerID="cri-o://5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a" 
gracePeriod=30 Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.934597 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="proxy-httpd" containerID="cri-o://b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389" gracePeriod=30 Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.934686 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-notification-agent" containerID="cri-o://e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa" gracePeriod=30 Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.934711 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="sg-core" containerID="cri-o://d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b" gracePeriod=30 Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.308824 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d"} Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.312073 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerID="b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389" exitCode=0 Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.312106 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerID="d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b" exitCode=2 Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.312139 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389"} Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.312198 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b"} Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.380806 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.593459 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.593504 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.633173 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 
18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.633231 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.325922 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerID="e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa" exitCode=0 Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.326199 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerID="5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a" exitCode=0 Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.325998 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa"} Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.326240 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a"} Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.722593 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.722615 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.760838 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.849884 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.849972 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850161 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850352 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850401 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850435 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850501 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850516 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.851057 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.852084 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.852112 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.867054 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm" (OuterVolumeSpecName: "kube-api-access-gkjmm") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "kube-api-access-gkjmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.869419 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts" (OuterVolumeSpecName: "scripts") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.922581 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.954358 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.954398 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.954409 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.992154 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.018336 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data" (OuterVolumeSpecName: "config-data") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.056656 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.056689 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.371611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"cda0d3d7eb455e4b9ead99374175951ce213d2d28aa9402eeb2c7090c5991dcb"} Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.371670 4985 scope.go:117] "RemoveContainer" containerID="b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.371848 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.438296 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.464112 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.484749 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: E0128 18:40:36.485494 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="proxy-httpd" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485518 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="proxy-httpd" Jan 28 18:40:36 crc kubenswrapper[4985]: E0128 18:40:36.485530 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-notification-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485538 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-notification-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: E0128 18:40:36.485571 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-central-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485579 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-central-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: E0128 18:40:36.485610 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="sg-core" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485620 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="sg-core" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485900 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-notification-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 
18:40:36.485929 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="proxy-httpd" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485950 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="sg-core" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485981 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-central-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.489554 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.495820 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.504942 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.512956 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.542307 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674375 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674449 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674477 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674519 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674576 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674835 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.745847 4985 scope.go:117] "RemoveContainer" containerID="d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.776804 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.776919 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.776962 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.776991 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.777010 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.777037 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.777066 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.780743 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.780913 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.788607 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.794517 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.802482 4985 scope.go:117] "RemoveContainer" containerID="e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.803338 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.803542 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.808330 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.824049 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.845145 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.845368 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerName="kube-state-metrics" containerID="cri-o://926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29" gracePeriod=30 Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.009216 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.009687 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" containerName="mysqld-exporter" containerID="cri-o://fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296" gracePeriod=30 Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.191188 4985 scope.go:117] "RemoveContainer" containerID="5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.280664 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" path="/var/lib/kubelet/pods/4bf14558-3072-45a9-bf6c-66d42c26bb42/volumes" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.388760 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3"} Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.391098 4985 generic.go:334] "Generic (PLEG): container finished" podID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerID="926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29" exitCode=2 Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.391181 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4b8dd73-ff4d-44d3-b30f-a994e993392d","Type":"ContainerDied","Data":"926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29"} Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.398334 4985 generic.go:334] "Generic (PLEG): container finished" podID="558a195a-5deb-441a-9eeb-9e506f49597e" containerID="fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296" exitCode=2 Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.398387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"558a195a-5deb-441a-9eeb-9e506f49597e","Type":"ContainerDied","Data":"fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296"} Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.555290 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.691016 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.699707 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") pod \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\" (UID: \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\") " Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.706935 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6" (OuterVolumeSpecName: "kube-api-access-45mg6") pod "b4b8dd73-ff4d-44d3-b30f-a994e993392d" (UID: "b4b8dd73-ff4d-44d3-b30f-a994e993392d"). InnerVolumeSpecName "kube-api-access-45mg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.804016 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.815859 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.905748 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") pod \"558a195a-5deb-441a-9eeb-9e506f49597e\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.905825 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8sjf\" (UniqueName: \"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") pod \"558a195a-5deb-441a-9eeb-9e506f49597e\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.905882 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") pod \"558a195a-5deb-441a-9eeb-9e506f49597e\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.922454 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf" (OuterVolumeSpecName: "kube-api-access-q8sjf") pod "558a195a-5deb-441a-9eeb-9e506f49597e" (UID: "558a195a-5deb-441a-9eeb-9e506f49597e"). InnerVolumeSpecName "kube-api-access-q8sjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.933907 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "558a195a-5deb-441a-9eeb-9e506f49597e" (UID: "558a195a-5deb-441a-9eeb-9e506f49597e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.980510 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data" (OuterVolumeSpecName: "config-data") pod "558a195a-5deb-441a-9eeb-9e506f49597e" (UID: "558a195a-5deb-441a-9eeb-9e506f49597e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.009360 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.009395 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8sjf\" (UniqueName: \"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.009406 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.417918 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4b8dd73-ff4d-44d3-b30f-a994e993392d","Type":"ContainerDied","Data":"ec024b4a882b8b962648e5e1cddea01209414bd2598d2c9c73886bd738d4ea3d"} Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.417979 4985 scope.go:117] "RemoveContainer" containerID="926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.418180 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.425841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"558a195a-5deb-441a-9eeb-9e506f49597e","Type":"ContainerDied","Data":"85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583"} Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.425989 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.431992 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"ce00adc004811ac9876895749ff5243ac88f3112b42fc43a6710153984d18f01"} Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.492856 4985 scope.go:117] "RemoveContainer" containerID="fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.518540 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.562169 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.590901 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: E0128 18:40:38.591554 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerName="kube-state-metrics" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.591580 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerName="kube-state-metrics" Jan 28 18:40:38 crc kubenswrapper[4985]: E0128 18:40:38.591633 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" containerName="mysqld-exporter" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.591640 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" containerName="mysqld-exporter" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.591858 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerName="kube-state-metrics" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.591881 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" containerName="mysqld-exporter" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.592964 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.596098 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.596475 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.617545 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.652761 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.679939 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.692163 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.694188 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.698858 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.698914 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.714981 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.749677 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbswb\" (UniqueName: \"kubernetes.io/projected/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-api-access-gbswb\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.749744 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.750014 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.750094 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.852744 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82rc\" (UniqueName: \"kubernetes.io/projected/6b1f6dd4-6d66-4f40-879f-5f0af3845842-kube-api-access-r82rc\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.852828 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbswb\" (UniqueName: \"kubernetes.io/projected/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-api-access-gbswb\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.852973 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853287 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853379 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-config-data\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853520 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853588 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.858596 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.871138 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.871777 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.875995 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbswb\" (UniqueName: \"kubernetes.io/projected/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-api-access-gbswb\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.956145 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.956206 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-config-data\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.956283 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.956362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r82rc\" (UniqueName: \"kubernetes.io/projected/6b1f6dd4-6d66-4f40-879f-5f0af3845842-kube-api-access-r82rc\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.961338 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.965046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-config-data\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.974703 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.979345 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82rc\" (UniqueName: \"kubernetes.io/projected/6b1f6dd4-6d66-4f40-879f-5f0af3845842-kube-api-access-r82rc\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.996751 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.014511 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.156833 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.264356 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:40:39 crc kubenswrapper[4985]: E0128 18:40:39.264599 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.283725 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" path="/var/lib/kubelet/pods/558a195a-5deb-441a-9eeb-9e506f49597e/volumes" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.327315 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" path="/var/lib/kubelet/pods/b4b8dd73-ff4d-44d3-b30f-a994e993392d/volumes" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.452930 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2"} Jan 28 18:40:40 crc kubenswrapper[4985]: W0128 18:40:40.092306 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e6eb1bd_1379_4be2_bcb0_6d7a37e93e9e.slice/crio-af5ba1e93278410187fd69c8fa837aeaecc5cffabce8a2786e1f6dcdecdc625f WatchSource:0}: Error finding container af5ba1e93278410187fd69c8fa837aeaecc5cffabce8a2786e1f6dcdecdc625f: Status 404 returned error can't find the container with id af5ba1e93278410187fd69c8fa837aeaecc5cffabce8a2786e1f6dcdecdc625f Jan 28 18:40:40 crc kubenswrapper[4985]: W0128 18:40:40.096434 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b1f6dd4_6d66_4f40_879f_5f0af3845842.slice/crio-e7a193405b4304741a718f4d37c1ff7fe232fa8f41840fee8539d24d7a9c9e08 WatchSource:0}: Error finding container e7a193405b4304741a718f4d37c1ff7fe232fa8f41840fee8539d24d7a9c9e08: Status 404 returned error can't find the container with id e7a193405b4304741a718f4d37c1ff7fe232fa8f41840fee8539d24d7a9c9e08 Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.113524 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.126881 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.473433 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e","Type":"ContainerStarted","Data":"af5ba1e93278410187fd69c8fa837aeaecc5cffabce8a2786e1f6dcdecdc625f"} Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.481673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"6b1f6dd4-6d66-4f40-879f-5f0af3845842","Type":"ContainerStarted","Data":"e7a193405b4304741a718f4d37c1ff7fe232fa8f41840fee8539d24d7a9c9e08"} Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.497592 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd"} Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510579 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55"} Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510714 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-api" containerID="cri-o://cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea" gracePeriod=30 Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510773 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-listener" containerID="cri-o://116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55" gracePeriod=30 Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510812 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-evaluator" containerID="cri-o://5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d" gracePeriod=30 Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510837 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-notifier" containerID="cri-o://45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3" gracePeriod=30 Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.533300 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.233797952 podStartE2EDuration="12.533281057s" podCreationTimestamp="2026-01-28 18:40:29 +0000 UTC" firstStartedPulling="2026-01-28 18:40:30.290438566 +0000 UTC m=+1641.117001387" lastFinishedPulling="2026-01-28 18:40:39.589921671 +0000 UTC m=+1650.416484492" observedRunningTime="2026-01-28 18:40:41.532348761 +0000 UTC m=+1652.358911592" watchObservedRunningTime="2026-01-28 18:40:41.533281057 +0000 UTC m=+1652.359843878" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.534859 4985 generic.go:334] "Generic (PLEG): container finished" podID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerID="5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d" exitCode=0 Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.535147 4985 generic.go:334] "Generic (PLEG): container finished" podID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerID="cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea" exitCode=0 Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.534957 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d"} Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 
18:40:42.535280 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea"} Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.538696 4985 generic.go:334] "Generic (PLEG): container finished" podID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerID="8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e" exitCode=137 Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.538741 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"adbc3193-99ed-4a75-848b-6b98dfef1d3a","Type":"ContainerDied","Data":"8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e"} Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.553850 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.558352 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.561561 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.638997 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.780992 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") pod \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.781341 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") pod \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.781680 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") pod \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.786896 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv" (OuterVolumeSpecName: "kube-api-access-vkmsv") pod "adbc3193-99ed-4a75-848b-6b98dfef1d3a" (UID: "adbc3193-99ed-4a75-848b-6b98dfef1d3a"). InnerVolumeSpecName "kube-api-access-vkmsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.815488 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adbc3193-99ed-4a75-848b-6b98dfef1d3a" (UID: "adbc3193-99ed-4a75-848b-6b98dfef1d3a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.816917 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data" (OuterVolumeSpecName: "config-data") pod "adbc3193-99ed-4a75-848b-6b98dfef1d3a" (UID: "adbc3193-99ed-4a75-848b-6b98dfef1d3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.885055 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.885086 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.885095 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.552538 4985 generic.go:334] "Generic (PLEG): container finished" podID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerID="45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3" exitCode=0 Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.552613 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3"} Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.555825 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.556458 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"adbc3193-99ed-4a75-848b-6b98dfef1d3a","Type":"ContainerDied","Data":"d8cf9fb9c6cec17cb1a2721de6a0e35c45b968fbf964f4ce2fc3f3f714ea3e1d"} Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.556495 4985 scope.go:117] "RemoveContainer" containerID="8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.573676 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.583875 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.599565 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.616193 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:43 crc kubenswrapper[4985]: E0128 18:40:43.616803 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.616819 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.617088 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.618018 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.623806 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.624074 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.624346 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.640884 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.642150 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.656897 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.672380 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.673947 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815036 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815082 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815178 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815207 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815306 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blw9r\" (UniqueName: \"kubernetes.io/projected/4e0bd087-7446-45b4-858b-7b514713d4fe-kube-api-access-blw9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916723 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-blw9r\" (UniqueName: \"kubernetes.io/projected/4e0bd087-7446-45b4-858b-7b514713d4fe-kube-api-access-blw9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916906 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916925 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916954 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916984 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.921387 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.921931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.923880 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.924628 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.936594 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blw9r\" (UniqueName: 
\"kubernetes.io/projected/4e0bd087-7446-45b4-858b-7b514713d4fe-kube-api-access-blw9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.937189 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.568718 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704"} Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.569089 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.573782 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.758800 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.798223 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.800132 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.846903 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.944695 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945084 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945170 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945209 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945323 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945389 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.047402 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048563 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048669 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048822 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048898 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.049030 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048485 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.050029 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") pod 
\"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.050075 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.050174 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.050750 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.067633 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.155951 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.287607 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" path="/var/lib/kubelet/pods/adbc3193-99ed-4a75-848b-6b98dfef1d3a/volumes" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.581232 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e0bd087-7446-45b4-858b-7b514713d4fe","Type":"ContainerStarted","Data":"0ea31fa32ec22c0401b08dda3f024f7fef07811f5c62450a61dc039159d908ff"} Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.581480 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e0bd087-7446-45b4-858b-7b514713d4fe","Type":"ContainerStarted","Data":"62f5d763e031e1fd03aa24e0cb0496eb67ec3549061d27a4e24005f40fdf07c0"} Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.587191 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e","Type":"ContainerStarted","Data":"dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908"} Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.587417 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.591997 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"6b1f6dd4-6d66-4f40-879f-5f0af3845842","Type":"ContainerStarted","Data":"38b3266549f39b090b2b6709a347b2040c589c8067c8e7ca7a4cc2de8aabc0c8"} Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.616386 4985 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" podUID="12d4e4cf-9153-4a32-9155-f9d13a248a26" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.630636 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.6306146139999997 podStartE2EDuration="2.630614614s" podCreationTimestamp="2026-01-28 18:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:45.601579854 +0000 UTC m=+1656.428142675" watchObservedRunningTime="2026-01-28 18:40:45.630614614 +0000 UTC m=+1656.457177445" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.632737 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.413223742 podStartE2EDuration="7.632726993s" podCreationTimestamp="2026-01-28 18:40:38 +0000 UTC" firstStartedPulling="2026-01-28 18:40:40.094934899 +0000 UTC m=+1650.921497720" lastFinishedPulling="2026-01-28 18:40:42.31443815 +0000 UTC m=+1653.141000971" observedRunningTime="2026-01-28 18:40:45.625764097 +0000 UTC m=+1656.452326938" watchObservedRunningTime="2026-01-28 18:40:45.632726993 +0000 UTC m=+1656.459289814" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.645970 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.920427538 podStartE2EDuration="7.645947777s" podCreationTimestamp="2026-01-28 18:40:38 +0000 UTC" firstStartedPulling="2026-01-28 18:40:40.101128454 +0000 UTC m=+1650.927691265" lastFinishedPulling="2026-01-28 18:40:43.826648683 +0000 UTC m=+1654.653211504" observedRunningTime="2026-01-28 18:40:45.643269141 +0000 UTC m=+1656.469831972" watchObservedRunningTime="2026-01-28 18:40:45.645947777 +0000 UTC m=+1656.472510608" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.737980 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:40:46 crc kubenswrapper[4985]: I0128 18:40:46.605855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerStarted","Data":"8a81f5a6bc9aeb4779fe5ba3167c9da81f9d6b2cee2d0a3316b0a2d07b8f7a9e"} Jan 28 18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.489202 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.630338 4985 generic.go:334] "Generic (PLEG): container finished" podID="f33e23a8-5c59-41b1-9afe-00977f966724" containerID="fd29c92499411247c46e32f0f3619427bf7f15dbc9ff2205fbac7905d817aa90" exitCode=0 Jan 28 18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.630614 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" containerID="cri-o://6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54" gracePeriod=30 Jan 28 18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.632091 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerDied","Data":"fd29c92499411247c46e32f0f3619427bf7f15dbc9ff2205fbac7905d817aa90"} Jan 28 
18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.632697 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" containerID="cri-o://5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.647219 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerStarted","Data":"8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419"} Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.647798 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651310 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7"} Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651441 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-central-agent" containerID="cri-o://62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651469 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651476 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-notification-agent" containerID="cri-o://c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651480 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="sg-core" containerID="cri-o://26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651517 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="proxy-httpd" containerID="cri-o://d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.655434 4985 generic.go:334] "Generic (PLEG): container finished" podID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerID="6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54" exitCode=143 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.655487 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerDied","Data":"6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54"} Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.673095 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" podStartSLOduration=4.67307818 podStartE2EDuration="4.67307818s" podCreationTimestamp="2026-01-28 18:40:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:48.66669361 +0000 UTC m=+1659.493256441" watchObservedRunningTime="2026-01-28 18:40:48.67307818 +0000 UTC m=+1659.499641001" Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.701102 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.815359995 podStartE2EDuration="12.701083211s" podCreationTimestamp="2026-01-28 18:40:36 +0000 UTC" firstStartedPulling="2026-01-28 18:40:37.698909684 +0000 UTC m=+1648.525472505" lastFinishedPulling="2026-01-28 18:40:47.5846329 +0000 UTC m=+1658.411195721" observedRunningTime="2026-01-28 18:40:48.693459345 +0000 UTC m=+1659.520022177" watchObservedRunningTime="2026-01-28 18:40:48.701083211 +0000 UTC m=+1659.527646032" Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.937854 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676560 4985 generic.go:334] "Generic (PLEG): container finished" podID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerID="d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7" exitCode=0 Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676874 4985 generic.go:334] "Generic (PLEG): container finished" podID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerID="26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704" exitCode=2 Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676620 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7"} Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676925 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704"} Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676939 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd"} Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676886 4985 generic.go:334] "Generic (PLEG): container finished" podID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerID="c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd" exitCode=0 Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676960 4985 generic.go:334] "Generic (PLEG): container finished" podID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerID="62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2" exitCode=0 Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.677324 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2"} Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.264162 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:40:50 crc kubenswrapper[4985]: E0128 18:40:50.264567 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.658170 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.693782 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"ce00adc004811ac9876895749ff5243ac88f3112b42fc43a6710153984d18f01"} Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.693841 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.693865 4985 scope.go:117] "RemoveContainer" containerID="d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.726996 4985 scope.go:117] "RemoveContainer" containerID="26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.750225 4985 scope.go:117] "RemoveContainer" containerID="c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.777048 4985 scope.go:117] "RemoveContainer" containerID="62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.802928 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803001 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803111 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803316 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803432 4985 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803476 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.805382 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.806460 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.810645 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts" (OuterVolumeSpecName: "scripts") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.810985 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5" (OuterVolumeSpecName: "kube-api-access-gxpb5") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "kube-api-access-gxpb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.847704 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906577 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906619 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906634 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906647 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906658 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.907281 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.944875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data" (OuterVolumeSpecName: "config-data") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.008506 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.008541 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.032794 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.050567 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066069 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:51 crc kubenswrapper[4985]: E0128 18:40:51.066659 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="proxy-httpd" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066683 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="proxy-httpd" Jan 28 18:40:51 crc kubenswrapper[4985]: E0128 18:40:51.066703 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-notification-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066711 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-notification-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: E0128 18:40:51.066724 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="sg-core" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066731 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="sg-core" Jan 28 18:40:51 crc kubenswrapper[4985]: E0128 18:40:51.066758 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-central-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066765 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-central-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.067057 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-notification-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.067078 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="proxy-httpd" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.067101 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="sg-core" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.067124 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-central-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.069730 4985 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.078852 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.080752 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.080932 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.103785 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.213704 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.213783 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.213838 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.213892 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.214015 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.214149 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.214306 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.214345 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnll\" (UniqueName: 
\"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.283871 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" path="/var/lib/kubelet/pods/8480417c-9ea7-4d07-bcbd-7734e301a0c6/volumes" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.284792 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.287323 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.292156 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316529 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316671 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316712 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vnll\" (UniqueName: \"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316820 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316871 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316907 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316947 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316986 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.317609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.317875 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.322347 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.323077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.323948 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.325880 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.325895 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.361708 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vnll\" (UniqueName: \"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.419305 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.419410 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.419637 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.460119 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.521938 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.522675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.522807 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.523350 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.523411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.546345 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.614747 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.712870 4985 generic.go:334] "Generic (PLEG): container finished" podID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerID="5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d" exitCode=0 Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.712922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerDied","Data":"5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d"} Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.984204 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.185574 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.251557 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") pod \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.251631 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") pod \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.251887 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") pod \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.251958 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") pod \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.252450 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs" (OuterVolumeSpecName: "logs") pod "72cdf54b-14dd-4844-bb8c-b68794fba1b9" (UID: "72cdf54b-14dd-4844-bb8c-b68794fba1b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.252921 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.257647 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7" (OuterVolumeSpecName: "kube-api-access-clxv7") pod "72cdf54b-14dd-4844-bb8c-b68794fba1b9" (UID: "72cdf54b-14dd-4844-bb8c-b68794fba1b9"). InnerVolumeSpecName "kube-api-access-clxv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.296926 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72cdf54b-14dd-4844-bb8c-b68794fba1b9" (UID: "72cdf54b-14dd-4844-bb8c-b68794fba1b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.304868 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data" (OuterVolumeSpecName: "config-data") pod "72cdf54b-14dd-4844-bb8c-b68794fba1b9" (UID: "72cdf54b-14dd-4844-bb8c-b68794fba1b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:52 crc kubenswrapper[4985]: W0128 18:40:52.318910 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe3dd10e_5081_4256_9c08_e2be3557bf65.slice/crio-1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd WatchSource:0}: Error finding container 1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd: Status 404 returned error can't find the container with id 1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.351330 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.356700 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.356734 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.356744 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.729671 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.729653 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerDied","Data":"afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47"} Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.730163 4985 scope.go:117] "RemoveContainer" containerID="5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.733447 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"99f6a59231cb74972d7065e16a91981feb750820d3a47ac21d46c1a8419a7fb5"} Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.735698 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerID="83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724" exitCode=0 Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.735732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerDied","Data":"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724"} Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.735751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerStarted","Data":"1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd"} Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.760196 4985 scope.go:117] "RemoveContainer" containerID="6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.815424 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.836231 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.850450 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:52 crc kubenswrapper[4985]: E0128 18:40:52.851089 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.851113 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" Jan 28 18:40:52 crc kubenswrapper[4985]: E0128 18:40:52.851168 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.851177 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.851501 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.851534 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" Jan 28 18:40:52 crc 
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.862475 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.864509 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.865599 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.864581 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972056 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972300 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972333 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972378 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972427 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074469 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074650 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074868 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.075421 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.080793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.084703 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.084926 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.085590 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.094561 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
\"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0" Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.286440 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" path="/var/lib/kubelet/pods/72cdf54b-14dd-4844-bb8c-b68794fba1b9/volumes" Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.307483 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.750542 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0"} Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.773039 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.937676 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.958727 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.773383 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerStarted","Data":"1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc"} Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.773785 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerStarted","Data":"c90565f788cfb36cdadf74a3373459a040e9f918b36e0c76ca75c9290bca74e9"} Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.793602 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.980307 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"] Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.982302 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.984832 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.984885 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.993731 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"] Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.030519 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.030646 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.030850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.031076 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.133433 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.133544 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.133615 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.133799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.138960 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.139988 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.140896 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.154200 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.158416 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.286968 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"] Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.287188 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="dnsmasq-dns" containerID="cri-o://4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9" gracePeriod=10 Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.347877 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:56 crc kubenswrapper[4985]: I0128 18:40:56.796525 4985 generic.go:334] "Generic (PLEG): container finished" podID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerID="4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9" exitCode=0 Jan 28 18:40:56 crc kubenswrapper[4985]: I0128 18:40:56.796626 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerDied","Data":"4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9"} Jan 28 18:40:56 crc kubenswrapper[4985]: I0128 18:40:56.799123 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerStarted","Data":"091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade"} Jan 28 18:40:56 crc kubenswrapper[4985]: I0128 18:40:56.829908 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.829883755 podStartE2EDuration="4.829883755s" podCreationTimestamp="2026-01-28 18:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:56.824449521 +0000 UTC m=+1667.651012352" watchObservedRunningTime="2026-01-28 18:40:56.829883755 +0000 UTC m=+1667.656446586" Jan 28 18:40:57 crc kubenswrapper[4985]: I0128 18:40:57.815401 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerDied","Data":"b12e09f6a40d1423b050a43aba39f7da27aac982d0fc418cb95ef0f8e230e6e1"} Jan 28 18:40:57 crc kubenswrapper[4985]: I0128 18:40:57.815989 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b12e09f6a40d1423b050a43aba39f7da27aac982d0fc418cb95ef0f8e230e6e1" Jan 28 18:40:57 crc kubenswrapper[4985]: I0128 18:40:57.916596 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010198 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") "
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010281 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") "
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") "
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010586 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") "
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010666 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") "
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010748 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") "
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.016087 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m" (OuterVolumeSpecName: "kube-api-access-d694m") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "kube-api-access-d694m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.076309 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.087097 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.088511 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.103143 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config" (OuterVolumeSpecName: "config") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.110478 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114434 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114606 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114677 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114740 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114839 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114963 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.825625 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.015910 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"]
Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.022714 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.035512 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"]
Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.277458 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" path="/var/lib/kubelet/pods/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0/volumes"
Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.332738 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"]
Jan 28 18:40:59 crc kubenswrapper[4985]: W0128 18:40:59.334595 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaabefa44_123b_48ce_a38b_8c5f6ed32b73.slice/crio-870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067 WatchSource:0}: Error finding container 870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067: Status 404 returned error can't find the container with id 870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067
Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.841570 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-559zx" event={"ID":"aabefa44-123b-48ce-a38b-8c5f6ed32b73","Type":"ContainerStarted","Data":"870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067"}
Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.846002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0"}
Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.849611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerStarted","Data":"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136"}
Jan 28 18:41:00 crc kubenswrapper[4985]: I0128 18:41:00.861121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-559zx" event={"ID":"aabefa44-123b-48ce-a38b-8c5f6ed32b73","Type":"ContainerStarted","Data":"db5c8f620d59499400c9788d3b5dfb76a365065e272b490b2eae142e49cd78fa"}
Jan 28 18:41:00 crc kubenswrapper[4985]: I0128 18:41:00.885589 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-559zx" podStartSLOduration=6.885565426 podStartE2EDuration="6.885565426s" podCreationTimestamp="2026-01-28 18:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:41:00.881951084 +0000 UTC m=+1671.708513925" watchObservedRunningTime="2026-01-28 18:41:00.885565426 +0000 UTC m=+1671.712128287"
Jan 28 18:41:02 crc kubenswrapper[4985]: I0128 18:41:02.893083 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba"}
Jan 28 18:41:03 crc kubenswrapper[4985]: I0128 18:41:03.308518 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 18:41:03 crc kubenswrapper[4985]: I0128 18:41:03.308584 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.264958 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"
Jan 28 18:41:04 crc kubenswrapper[4985]: E0128 18:41:04.265709 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.322471 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.322471 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.924179 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerID="e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136" exitCode=0
Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.924238 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerDied","Data":"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136"}
Jan 28 18:41:05 crc kubenswrapper[4985]: I0128 18:41:05.944003 4985 generic.go:334] "Generic (PLEG): container finished" podID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" containerID="db5c8f620d59499400c9788d3b5dfb76a365065e272b490b2eae142e49cd78fa" exitCode=0
Jan 28 18:41:05 crc kubenswrapper[4985]: I0128 18:41:05.944358 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-559zx" event={"ID":"aabefa44-123b-48ce-a38b-8c5f6ed32b73","Type":"ContainerDied","Data":"db5c8f620d59499400c9788d3b5dfb76a365065e272b490b2eae142e49cd78fa"}
Jan 28 18:41:06 crc kubenswrapper[4985]: I0128 18:41:06.961925 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236"}
Jan 28 18:41:06 crc kubenswrapper[4985]: I0128 18:41:06.963455 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 18:41:06 crc kubenswrapper[4985]: I0128 18:41:06.967116 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerStarted","Data":"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849"}
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerStarted","Data":"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849"} Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.016655 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.167945581 podStartE2EDuration="16.016625531s" podCreationTimestamp="2026-01-28 18:40:51 +0000 UTC" firstStartedPulling="2026-01-28 18:40:52.003461174 +0000 UTC m=+1662.830023995" lastFinishedPulling="2026-01-28 18:41:05.852141124 +0000 UTC m=+1676.678703945" observedRunningTime="2026-01-28 18:41:06.993861778 +0000 UTC m=+1677.820424609" watchObservedRunningTime="2026-01-28 18:41:07.016625531 +0000 UTC m=+1677.843188392" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.023093 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sn5lq" podStartSLOduration=2.813192667 podStartE2EDuration="16.023069103s" podCreationTimestamp="2026-01-28 18:40:51 +0000 UTC" firstStartedPulling="2026-01-28 18:40:52.760435855 +0000 UTC m=+1663.586998676" lastFinishedPulling="2026-01-28 18:41:05.970312251 +0000 UTC m=+1676.796875112" observedRunningTime="2026-01-28 18:41:07.01765409 +0000 UTC m=+1677.844216921" watchObservedRunningTime="2026-01-28 18:41:07.023069103 +0000 UTC m=+1677.849631924" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.458601 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.584101 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") pod \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.584201 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") pod \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.584603 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") pod \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.584674 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") pod \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.590318 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts" (OuterVolumeSpecName: "scripts") pod "aabefa44-123b-48ce-a38b-8c5f6ed32b73" (UID: "aabefa44-123b-48ce-a38b-8c5f6ed32b73"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.590509 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7" (OuterVolumeSpecName: "kube-api-access-j8nl7") pod "aabefa44-123b-48ce-a38b-8c5f6ed32b73" (UID: "aabefa44-123b-48ce-a38b-8c5f6ed32b73"). InnerVolumeSpecName "kube-api-access-j8nl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.619875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aabefa44-123b-48ce-a38b-8c5f6ed32b73" (UID: "aabefa44-123b-48ce-a38b-8c5f6ed32b73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.647081 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data" (OuterVolumeSpecName: "config-data") pod "aabefa44-123b-48ce-a38b-8c5f6ed32b73" (UID: "aabefa44-123b-48ce-a38b-8c5f6ed32b73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.687560 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.687596 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.687609 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.687623 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.981742 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-559zx" event={"ID":"aabefa44-123b-48ce-a38b-8c5f6ed32b73","Type":"ContainerDied","Data":"870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067"} Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.981811 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.981772 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.163363 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.163763 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" containerID="cri-o://1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.163902 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" containerID="cri-o://091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.207388 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.208179 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" containerID="cri-o://047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.221978 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.222322 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" containerID="cri-o://dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.222465 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" containerID="cri-o://a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.996858 4985 generic.go:334] "Generic (PLEG): container finished" podID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerID="1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc" exitCode=143 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.997170 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerDied","Data":"1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc"} Jan 28 18:41:09 crc kubenswrapper[4985]: I0128 18:41:09.002402 4985 generic.go:334] "Generic (PLEG): container finished" podID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerID="dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" exitCode=143 Jan 28 18:41:09 crc kubenswrapper[4985]: I0128 18:41:09.003679 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerDied","Data":"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937"} Jan 28 18:41:11 crc kubenswrapper[4985]: I0128 18:41:11.619741 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:11 crc kubenswrapper[4985]: I0128 18:41:11.620098 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.034769 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.086201 4985 generic.go:334] "Generic (PLEG): container finished" podID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerID="116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55" exitCode=137 Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.086241 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.092267 4985 generic.go:334] "Generic (PLEG): container finished" podID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerID="091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade" exitCode=0 Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.092376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerDied","Data":"091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.097907 4985 generic.go:334] "Generic (PLEG): container finished" podID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerID="a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" exitCode=0 Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.097968 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerDied","Data":"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.097993 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerDied","Data":"beb681875d1b031fab542c0f8d59f502b25e7da8eb5f0f02c317251a2c3309d0"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.098002 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.098011 4985 scope.go:117] "RemoveContainer" containerID="a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.109415 4985 generic.go:334] "Generic (PLEG): container finished" podID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" exitCode=0 Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.109450 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"938ef95c-9a4f-4f1e-b92c-8c16f0043102","Type":"ContainerDied","Data":"047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.109979 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.150583 4985 scope.go:117] "RemoveContainer" containerID="dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.195967 4985 scope.go:117] "RemoveContainer" containerID="a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.196844 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9\": container with ID starting with a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9 not found: ID does not exist" containerID="a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.196920 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9"} err="failed to get container status \"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9\": rpc error: code = NotFound desc = could not find container \"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9\": container with ID starting with a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9 not found: ID does not exist" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.196947 4985 scope.go:117] "RemoveContainer" containerID="dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.197458 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937\": container with ID starting with dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937 not found: ID does not exist" containerID="dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.197891 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937"} err="failed to get container status \"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937\": rpc error: code = NotFound desc = could not find container \"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937\": container with ID starting with dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937 not found: ID does not exist" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211110 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211172 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") pod \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211474 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211543 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211579 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211663 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") pod \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211854 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211889 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") pod \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211950 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") pod \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.219331 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598" (OuterVolumeSpecName: "kube-api-access-2c598") pod "1901b8df-d418-45ea-8d73-c6ffbf3a0da5" (UID: "1901b8df-d418-45ea-8d73-c6ffbf3a0da5"). InnerVolumeSpecName "kube-api-access-2c598". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.226190 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz" (OuterVolumeSpecName: "kube-api-access-2r4dz") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "kube-api-access-2r4dz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.227008 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts" (OuterVolumeSpecName: "scripts") pod "1901b8df-d418-45ea-8d73-c6ffbf3a0da5" (UID: "1901b8df-d418-45ea-8d73-c6ffbf3a0da5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.231838 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs" (OuterVolumeSpecName: "logs") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.271772 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data" (OuterVolumeSpecName: "config-data") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.295527 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.297054 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315710 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315746 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315756 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315766 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315773 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315782 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315790 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.364435 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.405586 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1901b8df-d418-45ea-8d73-c6ffbf3a0da5" (UID: "1901b8df-d418-45ea-8d73-c6ffbf3a0da5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.408331 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data" (OuterVolumeSpecName: "config-data") pod "1901b8df-d418-45ea-8d73-c6ffbf3a0da5" (UID: "1901b8df-d418-45ea-8d73-c6ffbf3a0da5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.417971 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.418005 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.475707 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2 is running failed: container process not found" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.476528 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2 is running failed: container process not found" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.477881 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2 is running failed: container process not found" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.477917 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519171 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519444 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519529 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519690 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519765 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.520075 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs" (OuterVolumeSpecName: "logs") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.520521 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.536605 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n" (OuterVolumeSpecName: "kube-api-access-5jb8n") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "kube-api-access-5jb8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.579676 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data" (OuterVolumeSpecName: "config-data") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.585732 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.623860 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.623896 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.623910 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.638453 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.643343 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.673442 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sn5lq" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" probeResult="failure" output=< Jan 28 18:41:12 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:41:12 crc kubenswrapper[4985]: > Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.728537 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.728607 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.746363 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.769458 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.805474 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.830007 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") pod \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.830365 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") pod \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.830406 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") pod \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.832338 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833349 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833383 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833407 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-api" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833415 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-api" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833432 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-evaluator" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833610 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-evaluator" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833632 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="dnsmasq-dns" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833647 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="dnsmasq-dns" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833663 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="init" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833671 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="init" Jan 28 18:41:12 
crc kubenswrapper[4985]: E0128 18:41:12.833694 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-notifier" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833703 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-notifier" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833718 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833726 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833744 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" containerName="nova-manage" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833753 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" containerName="nova-manage" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833769 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833777 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833793 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833800 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833819 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833827 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833865 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-listener" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833874 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-listener" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834202 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834225 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834236 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-notifier" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834265 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" 
containerName="nova-manage" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834278 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834307 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834318 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834336 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-listener" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834348 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-evaluator" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834363 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="dnsmasq-dns" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834374 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-api" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.836787 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.841029 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.841335 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.852474 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl" (OuterVolumeSpecName: "kube-api-access-xphwl") pod "938ef95c-9a4f-4f1e-b92c-8c16f0043102" (UID: "938ef95c-9a4f-4f1e-b92c-8c16f0043102"). InnerVolumeSpecName "kube-api-access-xphwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.866854 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.882931 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data" (OuterVolumeSpecName: "config-data") pod "938ef95c-9a4f-4f1e-b92c-8c16f0043102" (UID: "938ef95c-9a4f-4f1e-b92c-8c16f0043102"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.892335 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "938ef95c-9a4f-4f1e-b92c-8c16f0043102" (UID: "938ef95c-9a4f-4f1e-b92c-8c16f0043102"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.934427 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.934529 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7zwn\" (UniqueName: \"kubernetes.io/projected/7d99eaa1-3945-4192-9d61-7668d944bc63-kube-api-access-t7zwn\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.934863 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-config-data\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.935157 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.935687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d99eaa1-3945-4192-9d61-7668d944bc63-logs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.938096 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.938145 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.938164 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.040440 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d99eaa1-3945-4192-9d61-7668d944bc63-logs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.040640 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 
18:41:13.040687 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7zwn\" (UniqueName: \"kubernetes.io/projected/7d99eaa1-3945-4192-9d61-7668d944bc63-kube-api-access-t7zwn\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.040792 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-config-data\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.040893 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.041557 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d99eaa1-3945-4192-9d61-7668d944bc63-logs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.046278 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-config-data\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.047070 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.050933 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.061858 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7zwn\" (UniqueName: \"kubernetes.io/projected/7d99eaa1-3945-4192-9d61-7668d944bc63-kube-api-access-t7zwn\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.137582 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.137568 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"938ef95c-9a4f-4f1e-b92c-8c16f0043102","Type":"ContainerDied","Data":"8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156"} Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.138347 4985 scope.go:117] "RemoveContainer" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.141812 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"0e67457eae33c25cf3a4581aecdd202fe5ea7cb4f78ba1758d22e2ed33abfd6b"} Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.141822 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.144342 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerDied","Data":"c90565f788cfb36cdadf74a3373459a040e9f918b36e0c76ca75c9290bca74e9"} Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.144484 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.194943 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.198124 4985 scope.go:117] "RemoveContainer" containerID="116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.202322 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.225465 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.241884 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.255468 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.257351 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.263599 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.296981 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" path="/var/lib/kubelet/pods/938ef95c-9a4f-4f1e-b92c-8c16f0043102/volumes" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.297838 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" path="/var/lib/kubelet/pods/9aa1f962-f78d-41dc-a567-7c749f53ce57/volumes" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.298521 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.298552 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.303566 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.323344 4985 scope.go:117] "RemoveContainer" containerID="45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.325722 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.331201 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.332784 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.336021 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.336231 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.336373 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.336385 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bbsjj" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.340178 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.348472 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68dv9\" (UniqueName: \"kubernetes.io/projected/bdade9ba-ba1b-4093-bc40-73f68c84615f-kube-api-access-68dv9\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.349658 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-config-data\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.349693 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.359192 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.366107 4985 scope.go:117] "RemoveContainer" containerID="5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.379225 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.383107 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.388408 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.388787 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.389031 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.389436 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.426432 4985 scope.go:117] "RemoveContainer" containerID="cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.452945 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454070 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454101 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68dv9\" (UniqueName: \"kubernetes.io/projected/bdade9ba-ba1b-4093-bc40-73f68c84615f-kube-api-access-68dv9\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454274 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc 
kubenswrapper[4985]: I0128 18:41:13.454397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-config-data\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454564 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.465081 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.469798 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-config-data\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.477897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68dv9\" (UniqueName: \"kubernetes.io/projected/bdade9ba-ba1b-4093-bc40-73f68c84615f-kube-api-access-68dv9\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.556772 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.556859 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-config-data\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.556958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t86tb\" (UniqueName: \"kubernetes.io/projected/11eaf6b3-7169-4587-af33-68f04428e630-kube-api-access-t86tb\") pod \"nova-api-0\" (UID: 
\"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.556997 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557068 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eaf6b3-7169-4587-af33-68f04428e630-logs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557368 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557388 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557414 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557469 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557517 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-public-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.566132 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.566218 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.566324 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.572759 4985 scope.go:117] "RemoveContainer" containerID="091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.575480 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.591706 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.596149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.597239 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.654368 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.659511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.660126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-public-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.660521 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-config-data\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.660583 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t86tb\" (UniqueName: \"kubernetes.io/projected/11eaf6b3-7169-4587-af33-68f04428e630-kube-api-access-t86tb\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.660606 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.661148 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eaf6b3-7169-4587-af33-68f04428e630-logs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.661517 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eaf6b3-7169-4587-af33-68f04428e630-logs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.663049 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.666085 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.666131 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-public-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc 
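Each VerifyControllerAttachedVolume/MountVolume pair above corresponds to one volume in the new pod's spec: Secret-backed volumes for config data, certificates and the CA bundle, an emptyDir for logs, and a projected service-account token (the kube-api-access-* volume, which the API server injects on its own). A sketch of the equivalent volume declarations, with Secret names inferred from the log (the combined-ca-bundle name is an assumption):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volume shapes behind the reconciler activity above; the projected
	// service-account token volume is intentionally omitted because it is
	// not declared by hand in the pod spec.
	vols := []corev1.Volume{
		{Name: "config-data", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "nova-api-config-data"},
		}},
		{Name: "combined-ca-bundle", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"}, // assumed name
		}},
		{Name: "logs", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		}},
	}
	for _, v := range vols {
		fmt.Println(v.Name)
	}
}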
Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.667847 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-config-data\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0"
Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.682857 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t86tb\" (UniqueName: \"kubernetes.io/projected/11eaf6b3-7169-4587-af33-68f04428e630-kube-api-access-t86tb\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0"
Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.738630 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.755073 4985 scope.go:117] "RemoveContainer" containerID="1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc"
Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.817494 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:41:13 crc kubenswrapper[4985]: W0128 18:41:13.843706 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d99eaa1_3945_4192_9d61_7668d944bc63.slice/crio-c1b1b1c71c676689b1ac66ce78be53cc98fe0ab7f13cfd561d142c3e25661d06 WatchSource:0}: Error finding container c1b1b1c71c676689b1ac66ce78be53cc98fe0ab7f13cfd561d142c3e25661d06: Status 404 returned error can't find the container with id c1b1b1c71c676689b1ac66ce78be53cc98fe0ab7f13cfd561d142c3e25661d06
Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.132991 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:41:14 crc kubenswrapper[4985]: W0128 18:41:14.142444 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdade9ba_ba1b_4093_bc40_73f68c84615f.slice/crio-11e7d1976880568de7f209c49fdfd85b93388e3216f4220a67f14e8fe47407b5 WatchSource:0}: Error finding container 11e7d1976880568de7f209c49fdfd85b93388e3216f4220a67f14e8fe47407b5: Status 404 returned error can't find the container with id 11e7d1976880568de7f209c49fdfd85b93388e3216f4220a67f14e8fe47407b5
Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.162079 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7d99eaa1-3945-4192-9d61-7668d944bc63","Type":"ContainerStarted","Data":"cb4e77165d4c242fceac190bae312018f4bf5ba3d1b964f0f395b55804829001"}
Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.162130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7d99eaa1-3945-4192-9d61-7668d944bc63","Type":"ContainerStarted","Data":"c1b1b1c71c676689b1ac66ce78be53cc98fe0ab7f13cfd561d142c3e25661d06"}
Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.164435 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bdade9ba-ba1b-4093-bc40-73f68c84615f","Type":"ContainerStarted","Data":"11e7d1976880568de7f209c49fdfd85b93388e3216f4220a67f14e8fe47407b5"}
Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.287580 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 28 18:41:14 crc kubenswrapper[4985]: W0128 18:41:14.381623 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11eaf6b3_7169_4587_af33_68f04428e630.slice/crio-034ae2c0070804e7db1906adfc26624a7f8fe2b13ab6eba51dcd4a3411e80586 WatchSource:0}: Error finding container 034ae2c0070804e7db1906adfc26624a7f8fe2b13ab6eba51dcd4a3411e80586: Status 404 returned error can't find the container with id 034ae2c0070804e7db1906adfc26624a7f8fe2b13ab6eba51dcd4a3411e80586
Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.390928 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.184566 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eaf6b3-7169-4587-af33-68f04428e630","Type":"ContainerStarted","Data":"2da76c7b42f6e653a658e564cc2f54b45b0ed659bf455b1ce5864b0d1b7b80db"}
Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.184914 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eaf6b3-7169-4587-af33-68f04428e630","Type":"ContainerStarted","Data":"42fd70cf0dd54e6443e4a2a0fa1c29031e80910dccef760736776a3c20cf849f"}
Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.184933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eaf6b3-7169-4587-af33-68f04428e630","Type":"ContainerStarted","Data":"034ae2c0070804e7db1906adfc26624a7f8fe2b13ab6eba51dcd4a3411e80586"}
Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.188821 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7d99eaa1-3945-4192-9d61-7668d944bc63","Type":"ContainerStarted","Data":"62c59e17b831dbd248c35901ed743b75a136fc04a9d8bdbf20cf7202fb2a2f48"}
Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.192508 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bdade9ba-ba1b-4093-bc40-73f68c84615f","Type":"ContainerStarted","Data":"1efca71695f8186c9bc5d99e0fbbf2c7fca3405a714627a13e17d76b0b7042a7"}
Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.194507 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"bc5e5343b1013225c0f09fa05053ffaef8f092c7d05aeab8940382306b98a83a"}
Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.224286 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.224267618 podStartE2EDuration="3.224267618s" podCreationTimestamp="2026-01-28 18:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:41:15.211468766 +0000 UTC m=+1686.038031597" watchObservedRunningTime="2026-01-28 18:41:15.224267618 +0000 UTC m=+1686.050830439"
Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.315825 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" path="/var/lib/kubelet/pods/1901b8df-d418-45ea-8d73-c6ffbf3a0da5/volumes"
Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.317776 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" path="/var/lib/kubelet/pods/7258e3aa-2eb9-4bc7-a143-76946c12b889/volumes"
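The "Failed to process watch event ... Status 404" warnings above are a benign race: cAdvisor notices the new crio-* cgroup before the runtime has registered the container, so the first status lookup misses. The pod_startup_latency_tracker lines that follow print timestamps in Go's default time.Time format; a small sketch reproducing the nova-metadata-0 SLO duration from those two timestamps:

package main

import (
	"fmt"
	"time"
)

// Layout matching the tracker's timestamps, e.g. "2026-01-28 18:41:12 +0000 UTC".
// Go accepts a fractional-seconds field on parse even though the layout omits it.
const layout = "2006-01-02 15:04:05 -0700 MST"

func main() {
	created, _ := time.Parse(layout, "2026-01-28 18:41:12 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-28 18:41:15.224267618 +0000 UTC")
	// Gap between podCreationTimestamp and the watch-observed running time;
	// matches podStartSLOduration=3.224267618 in the entry above.
	fmt.Println(observed.Sub(created)) // 3.224267618s
}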
pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7"} Jan 28 18:41:16 crc kubenswrapper[4985]: I0128 18:41:16.253610 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.253583024 podStartE2EDuration="3.253583024s" podCreationTimestamp="2026-01-28 18:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:41:16.244641911 +0000 UTC m=+1687.071204772" watchObservedRunningTime="2026-01-28 18:41:16.253583024 +0000 UTC m=+1687.080145845" Jan 28 18:41:16 crc kubenswrapper[4985]: I0128 18:41:16.261877 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.261853617 podStartE2EDuration="3.261853617s" podCreationTimestamp="2026-01-28 18:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:41:15.237187642 +0000 UTC m=+1686.063750463" watchObservedRunningTime="2026-01-28 18:41:16.261853617 +0000 UTC m=+1687.088416438" Jan 28 18:41:16 crc kubenswrapper[4985]: I0128 18:41:16.264239 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:41:16 crc kubenswrapper[4985]: E0128 18:41:16.264726 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:41:17 crc kubenswrapper[4985]: I0128 18:41:17.238022 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895"} Jan 28 18:41:18 crc kubenswrapper[4985]: I0128 18:41:18.204455 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:41:18 crc kubenswrapper[4985]: I0128 18:41:18.205483 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:41:18 crc kubenswrapper[4985]: I0128 18:41:18.597119 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 18:41:19 crc kubenswrapper[4985]: I0128 18:41:19.262230 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65"} Jan 28 18:41:21 crc kubenswrapper[4985]: I0128 18:41:21.475073 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 18:41:22 crc kubenswrapper[4985]: I0128 18:41:22.320973 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731"} Jan 28 18:41:22 crc kubenswrapper[4985]: I0128 
Jan 28 18:41:22 crc kubenswrapper[4985]: I0128 18:41:22.349496 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.788268824 podStartE2EDuration="9.349478909s" podCreationTimestamp="2026-01-28 18:41:13 +0000 UTC" firstStartedPulling="2026-01-28 18:41:14.305120963 +0000 UTC m=+1685.131683784" lastFinishedPulling="2026-01-28 18:41:20.866331028 +0000 UTC m=+1691.692893869" observedRunningTime="2026-01-28 18:41:22.343277704 +0000 UTC m=+1693.169840535" watchObservedRunningTime="2026-01-28 18:41:22.349478909 +0000 UTC m=+1693.176041740"
Jan 28 18:41:22 crc kubenswrapper[4985]: I0128 18:41:22.689182 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sn5lq" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" probeResult="failure" output=<
Jan 28 18:41:22 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 18:41:22 crc kubenswrapper[4985]: >
Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.204377 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.204619 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.597197 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.628687 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.738458 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.738507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.221559 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7d99eaa1-3945-4192-9d61-7668d944bc63" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.221612 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7d99eaa1-3945-4192-9d61-7668d944bc63" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.382645 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.750513 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11eaf6b3-7169-4587-af33-68f04428e630" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.760465 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11eaf6b3-7169-4587-af33-68f04428e630" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:41:29 crc kubenswrapper[4985]: I0128 18:41:29.264544 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"
Jan 28 18:41:29 crc kubenswrapper[4985]: E0128 18:41:29.265223 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:41:32 crc kubenswrapper[4985]: I0128 18:41:32.667169 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sn5lq" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" probeResult="failure" output=<
Jan 28 18:41:32 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 18:41:32 crc kubenswrapper[4985]: >
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.210085 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.215613 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.217504 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.490960 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.746665 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.746786 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.747470 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.747507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.756985 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.758923 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 18:41:41 crc kubenswrapper[4985]: I0128 18:41:41.670571 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sn5lq"
Jan 28 18:41:41 crc kubenswrapper[4985]: I0128 18:41:41.734099 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sn5lq"
Jan 28 18:41:41 crc kubenswrapper[4985]: I0128 18:41:41.925103 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"]
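The probe traffic above shows startup probes flipping from unhealthy to started before readiness goes ready; the failures are plain HTTPS GETs that must answer within the probe's one-second timeout ("Client.Timeout exceeded"). A hedged sketch of a startup probe consistent with the nova-metadata failures, with period and failure threshold as illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// An HTTPS GET on :8775 matching the probe URL in the failures above;
	// TimeoutSeconds: 1 is what produces the 1s Client.Timeout errors.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/",
				Port:   intstr.FromInt(8775),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		TimeoutSeconds:   1,
		PeriodSeconds:    5,  // assumed
		FailureThreshold: 12, // assumed; gives the pod about a minute to start
	}
	fmt.Printf("timeout=%ds period=%ds failures=%d\n",
		probe.TimeoutSeconds, probe.PeriodSeconds, probe.FailureThreshold)
}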
pod="openshift-marketplace/community-operators-sn5lq" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" containerID="cri-o://a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" gracePeriod=2 Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.836925 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-qjrfx"] Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.858749 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-qjrfx"] Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.918185 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.920217 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.946437 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.012898 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.012958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.013097 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.115911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.116182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.116362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.123725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.124053 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.135592 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.245732 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.260135 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.263374 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:41:44 crc kubenswrapper[4985]: E0128 18:41:44.263656 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.422449 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") pod \"fe3dd10e-5081-4256-9c08-e2be3557bf65\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.422864 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") pod \"fe3dd10e-5081-4256-9c08-e2be3557bf65\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.422984 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") pod \"fe3dd10e-5081-4256-9c08-e2be3557bf65\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.423921 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities" (OuterVolumeSpecName: "utilities") pod "fe3dd10e-5081-4256-9c08-e2be3557bf65" (UID: "fe3dd10e-5081-4256-9c08-e2be3557bf65"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.424493 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.427406 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m" (OuterVolumeSpecName: "kube-api-access-blw8m") pod "fe3dd10e-5081-4256-9c08-e2be3557bf65" (UID: "fe3dd10e-5081-4256-9c08-e2be3557bf65"). InnerVolumeSpecName "kube-api-access-blw8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.491825 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe3dd10e-5081-4256-9c08-e2be3557bf65" (UID: "fe3dd10e-5081-4256-9c08-e2be3557bf65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.526728 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.526773 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644297 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerID="a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" exitCode=0 Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644351 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerDied","Data":"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849"} Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644417 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerDied","Data":"1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd"} Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644415 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644443 4985 scope.go:117] "RemoveContainer" containerID="a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.678744 4985 scope.go:117] "RemoveContainer" containerID="e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.689770 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.700909 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.715343 4985 scope.go:117] "RemoveContainer" containerID="83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.738236 4985 scope.go:117] "RemoveContainer" containerID="a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" Jan 28 18:41:44 crc kubenswrapper[4985]: E0128 18:41:44.738694 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849\": container with ID starting with a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849 not found: ID does not exist" containerID="a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.738726 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849"} err="failed to get container status \"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849\": rpc error: code = NotFound desc = could not find container \"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849\": container with ID starting with a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849 not found: ID does not exist" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.738747 4985 scope.go:117] "RemoveContainer" containerID="e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136" Jan 28 18:41:44 crc kubenswrapper[4985]: E0128 18:41:44.739177 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136\": container with ID starting with e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136 not found: ID does not exist" containerID="e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.739225 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136"} err="failed to get container status \"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136\": rpc error: code = NotFound desc = could not find container \"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136\": container with ID starting with e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136 not found: ID does not exist" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.739277 4985 scope.go:117] "RemoveContainer" 
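The NotFound errors above are benign: RemoveContainer raced with a deletion that had already completed, so ContainerStatus has nothing to return and the kubelet just logs and moves on. A sketch of the usual pattern for treating a gRPC NotFound as "already gone", with statusFn standing in for a CRI call (a placeholder, not a real API):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfGone treats a gRPC NotFound from the runtime as success, the same
// way the "could not find container" errors above are logged and ignored.
func removeIfGone(statusFn func(id string) error, id string) error {
	if err := statusFn(id); err != nil {
		if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
			return nil // container already removed: nothing left to do
		}
		return fmt.Errorf("container %s: %w", id, err)
	}
	// a real implementation would remove the container here
	return nil
}

func main() {
	gone := func(string) error { return status.Error(codes.NotFound, "could not find container") }
	fmt.Println(removeIfGone(gone, "a1e5e70c") == nil) // true
}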
containerID="83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724" Jan 28 18:41:44 crc kubenswrapper[4985]: E0128 18:41:44.739612 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724\": container with ID starting with 83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724 not found: ID does not exist" containerID="83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.739644 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724"} err="failed to get container status \"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724\": rpc error: code = NotFound desc = could not find container \"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724\": container with ID starting with 83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724 not found: ID does not exist" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.807306 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:41:45 crc kubenswrapper[4985]: I0128 18:41:45.304339 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" path="/var/lib/kubelet/pods/dda9fdbc-ce81-4e63-b32f-733379d893d4/volumes" Jan 28 18:41:45 crc kubenswrapper[4985]: I0128 18:41:45.307072 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" path="/var/lib/kubelet/pods/fe3dd10e-5081-4256-9c08-e2be3557bf65/volumes" Jan 28 18:41:45 crc kubenswrapper[4985]: I0128 18:41:45.675780 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r7ml7" event={"ID":"627220be-fa5f-49a6-9c9e-b3ae5e49afec","Type":"ContainerStarted","Data":"319bf1dcb8102c51957853cf08d45a01f4387e66993d72cad23092e9e3dddb4f"} Jan 28 18:41:45 crc kubenswrapper[4985]: I0128 18:41:45.779153 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.643453 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.644079 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-central-agent" containerID="cri-o://b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0" gracePeriod=30 Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.644105 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="sg-core" containerID="cri-o://4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba" gracePeriod=30 Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.644192 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-notification-agent" containerID="cri-o://2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0" gracePeriod=30 Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.644432 4985 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="proxy-httpd" containerID="cri-o://fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236" gracePeriod=30 Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.092432 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.705991 4985 generic.go:334] "Generic (PLEG): container finished" podID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerID="fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236" exitCode=0 Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706030 4985 generic.go:334] "Generic (PLEG): container finished" podID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerID="4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba" exitCode=2 Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706043 4985 generic.go:334] "Generic (PLEG): container finished" podID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerID="b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0" exitCode=0 Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706055 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236"} Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706090 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba"} Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706102 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0"} Jan 28 18:41:48 crc kubenswrapper[4985]: I0128 18:41:48.746793 4985 generic.go:334] "Generic (PLEG): container finished" podID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerID="2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0" exitCode=0 Jan 28 18:41:48 crc kubenswrapper[4985]: I0128 18:41:48.747383 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0"} Jan 28 18:41:48 crc kubenswrapper[4985]: I0128 18:41:48.970540 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.156553 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.156712 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.156853 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.156886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157012 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vnll\" (UniqueName: \"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157050 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157164 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157210 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157850 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.158205 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.161017 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.176649 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll" (OuterVolumeSpecName: "kube-api-access-5vnll") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "kube-api-access-5vnll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.177038 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts" (OuterVolumeSpecName: "scripts") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.262189 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.262267 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.262279 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vnll\" (UniqueName: \"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.272438 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.303089 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.328562 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.364483 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.365230 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.365280 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.408440 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data" (OuterVolumeSpecName: "config-data") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.469513 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.763007 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"99f6a59231cb74972d7065e16a91981feb750820d3a47ac21d46c1a8419a7fb5"} Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.763232 4985 scope.go:117] "RemoveContainer" containerID="fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.763236 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.860463 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.881556 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.908355 4985 scope.go:117] "RemoveContainer" containerID="4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.950064 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951320 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="sg-core" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951337 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="sg-core" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951363 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-notification-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951371 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-notification-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951392 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="proxy-httpd" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951400 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="proxy-httpd" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951557 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="extract-utilities" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951569 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="extract-utilities" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951598 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951619 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951632 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-central-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951640 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-central-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951654 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="extract-content" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951662 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="extract-content" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952033 4985 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="proxy-httpd" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952073 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="sg-core" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952093 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952111 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-central-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952140 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-notification-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.955173 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.959595 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.959851 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.963827 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.969185 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.017616 4985 scope.go:117] "RemoveContainer" containerID="2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.084505 4985 scope.go:117] "RemoveContainer" containerID="b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.094951 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-config-data\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.095120 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.095341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxcg\" (UniqueName: \"kubernetes.io/projected/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-kube-api-access-mzxcg\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.095425 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-run-httpd\") pod 
\"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.095759 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-scripts\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.096034 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-log-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.096079 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.096183 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.198808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.198890 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.198958 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-config-data\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199004 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199098 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxcg\" (UniqueName: \"kubernetes.io/projected/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-kube-api-access-mzxcg\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199149 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-run-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199477 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-scripts\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199570 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-log-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199924 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-run-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199993 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-log-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.203863 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.204614 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.204794 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-config-data\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.210790 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.217365 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-scripts\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.227850 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxcg\" (UniqueName: \"kubernetes.io/projected/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-kube-api-access-mzxcg\") 
pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.297956 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.870179 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.941207 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" containerID="cri-o://1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25" gracePeriod=604795 Jan 28 18:41:51 crc kubenswrapper[4985]: I0128 18:41:51.282389 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" path="/var/lib/kubelet/pods/9079aa62-2b93-4559-bff4-af80b69e23a7/volumes" Jan 28 18:41:51 crc kubenswrapper[4985]: I0128 18:41:51.714076 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" containerID="cri-o://aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c" gracePeriod=604796 Jan 28 18:41:51 crc kubenswrapper[4985]: I0128 18:41:51.797772 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"034d5baa8d85116bd4079fc576f9bfd89326c5aef395eac6b4985a13d07cd61a"} Jan 28 18:41:57 crc kubenswrapper[4985]: I0128 18:41:57.935372 4985 generic.go:334] "Generic (PLEG): container finished" podID="9549037f-5867-44ac-86dc-a02105e4c414" containerID="1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25" exitCode=0 Jan 28 18:41:57 crc kubenswrapper[4985]: I0128 18:41:57.935877 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerDied","Data":"1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25"} Jan 28 18:41:58 crc kubenswrapper[4985]: I0128 18:41:58.951278 4985 generic.go:334] "Generic (PLEG): container finished" podID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerID="aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c" exitCode=0 Jan 28 18:41:58 crc kubenswrapper[4985]: I0128 18:41:58.951379 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerDied","Data":"aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c"} Jan 28 18:41:59 crc kubenswrapper[4985]: I0128 18:41:59.267137 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:41:59 crc kubenswrapper[4985]: E0128 18:41:59.267877 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.188369 4985 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.195283 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.297603 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306426 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306506 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306686 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306715 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306769 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309056 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309120 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309161 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 
18:42:01.309191 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309226 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309292 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309327 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309392 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.318799 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.318926 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319052 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319081 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319130 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 
18:42:01.319173 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319213 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319282 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.335855 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.336726 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.336808 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.336879 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.343050 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.343202 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb" (OuterVolumeSpecName: "kube-api-access-pdmbb") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "kube-api-access-pdmbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.352116 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql" (OuterVolumeSpecName: "kube-api-access-td8ql") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "kube-api-access-td8ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.352334 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.352879 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.353231 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.358572 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.379977 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info" (OuterVolumeSpecName: "pod-info") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.384917 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.430221 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde" (OuterVolumeSpecName: "persistence") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.433967 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434004 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434016 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434026 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434034 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434046 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434055 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434064 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434074 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434086 4985 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434096 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434104 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434132 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") on node \"crc\" " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434142 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.447287 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info" (OuterVolumeSpecName: "pod-info") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.450394 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data" (OuterVolumeSpecName: "config-data") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: E0128 18:42:01.456682 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28 podName:9549037f-5867-44ac-86dc-a02105e4c414 nodeName:}" failed. No retries permitted until 2026-01-28 18:42:01.956652493 +0000 UTC m=+1732.783215314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.494525 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.515761 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde") on node "crc" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.533610 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf" (OuterVolumeSpecName: "server-conf") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.539603 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.541596 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.541833 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.541945 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.552889 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data" (OuterVolumeSpecName: "config-data") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.553009 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf" (OuterVolumeSpecName: "server-conf") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.594978 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.610912 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.645116 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.645148 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.645159 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.645169 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.989566 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerDied","Data":"f0ff3c53025b9ae422df2e7cccc0ec25b7dd495fd74546696ee043e91187bb41"} Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.989610 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.989624 4985 scope.go:117] "RemoveContainer" containerID="aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.997921 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerDied","Data":"3743df7761e9f95626d5189d3a604fc7ae4f9d57706f392ce36c256fb508d124"} Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.998046 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.029846 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.050281 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.053522 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.073834 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:02 crc kubenswrapper[4985]: E0128 18:42:02.074358 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="setup-container" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074375 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="setup-container" Jan 28 18:42:02 crc kubenswrapper[4985]: E0128 18:42:02.074406 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074412 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: E0128 18:42:02.074434 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="setup-container" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074440 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="setup-container" Jan 28 18:42:02 crc kubenswrapper[4985]: E0128 18:42:02.074456 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074462 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074696 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074714 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.075879 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.078691 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.078882 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.079012 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.079210 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-zs2dp" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.079382 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.079517 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.082480 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.093416 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.095986 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28" (OuterVolumeSpecName: "persistence") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "pvc-640fff7e-293b-4d54-bc96-a2aead370a28". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156361 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156430 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156532 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156588 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156642 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156663 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156794 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156826 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156914 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156942 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74lst\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-kube-api-access-74lst\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.157034 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") on node \"crc\" " Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.200500 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.201104 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-640fff7e-293b-4d54-bc96-a2aead370a28" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28") on node "crc" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.248875 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.261953 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.262403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.262608 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.262762 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc 
kubenswrapper[4985]: I0128 18:42:02.262913 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263426 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263626 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265144 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.264300 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263750 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265645 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265776 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74lst\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-kube-api-access-74lst\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265682 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265853 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.272955 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.279337 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.288854 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.288863 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.288925 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac8bde78162f1032f95f647174ef8183aa4e0f86240347c6b6b8d4a86e7076a1/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.291839 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.306812 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74lst\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-kube-api-access-74lst\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.308304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.329318 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.333811 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.357698 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374725 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/249a0e05-d210-402f-b7f8-2caf153346d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374799 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374827 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374916 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374961 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gk5\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-kube-api-access-v5gk5\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.375095 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.375143 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/249a0e05-d210-402f-b7f8-2caf153346d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.375191 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.375214 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.384650 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476812 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/249a0e05-d210-402f-b7f8-2caf153346d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476897 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476921 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476957 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/249a0e05-d210-402f-b7f8-2caf153346d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476980 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: 
\"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477046 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5gk5\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-kube-api-access-v5gk5\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477139 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477727 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477888 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.479122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.480966 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.481272 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.482481 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.482535 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18da3f6437b5d54d0b067e2370e468c4fc3f3bb8be36828902e2b198f7e21ef1/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.483169 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/249a0e05-d210-402f-b7f8-2caf153346d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.483494 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.483770 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.485245 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/249a0e05-d210-402f-b7f8-2caf153346d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.491067 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.500918 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.502200 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5gk5\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-kube-api-access-v5gk5\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.563005 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.710630 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:42:03 crc kubenswrapper[4985]: I0128 18:42:03.282133 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" path="/var/lib/kubelet/pods/41c1858c-ad6e-441f-b998-c57290cc5d68/volumes" Jan 28 18:42:03 crc kubenswrapper[4985]: I0128 18:42:03.283822 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9549037f-5867-44ac-86dc-a02105e4c414" path="/var/lib/kubelet/pods/9549037f-5867-44ac-86dc-a02105e4c414/volumes" Jan 28 18:42:04 crc kubenswrapper[4985]: I0128 18:42:04.874700 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Jan 28 18:42:04 crc kubenswrapper[4985]: I0128 18:42:04.977933 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: i/o timeout" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.011685 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.012099 4985 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.012234 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9vtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-r7ml7_openstack(627220be-fa5f-49a6-9c9e-b3ae5e49afec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.013993 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-r7ml7" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.105428 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-r7ml7" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.346632 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.349501 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.353562 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.377034 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.389458 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.389530 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.389953 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.390075 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.390184 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.390221 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.390270 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492842 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492902 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492924 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492947 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492970 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") pod 
\"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492996 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.493910 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.493911 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.493954 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.494146 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.494443 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.494561 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.512319 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.680353 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:10 crc kubenswrapper[4985]: I0128 18:42:10.264344 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:42:10 crc kubenswrapper[4985]: E0128 18:42:10.264592 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:12 crc kubenswrapper[4985]: I0128 18:42:12.347198 4985 scope.go:117] "RemoveContainer" containerID="dfcb150ccda2aa4d1050a6d900540fe9f90c22d4f5256e19b6eeee11fa6e624a" Jan 28 18:42:12 crc kubenswrapper[4985]: I0128 18:42:12.435778 4985 scope.go:117] "RemoveContainer" containerID="1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25" Jan 28 18:42:12 crc kubenswrapper[4985]: I0128 18:42:12.563428 4985 scope.go:117] "RemoveContainer" containerID="bb84d317406cd6ce8331d52ba3971c969e272858edb60fe48bf5c6408f6194f8" Jan 28 18:42:12 crc kubenswrapper[4985]: E0128 18:42:12.565792 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 28 18:42:12 crc kubenswrapper[4985]: E0128 18:42:12.565844 4985 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 28 18:42:12 crc kubenswrapper[4985]: E0128 18:42:12.565977 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59bh579h584hbbh688h68h596h647h655h79h55hcch688h694h59chc8h54chb5h8ch568hb7h59fh557hfdh5cbh6h57dh565h656h59h97h65fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzxcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b29b2a3b-ca12-4e1c-8816-0d28cebe2dde): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:42:12 crc kubenswrapper[4985]: I0128 18:42:12.913215 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:42:13 crc kubenswrapper[4985]: W0128 18:42:13.063936 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d82dad_dc98_4c0f_90c2_0b25f7d73c01.slice/crio-369ff544797e7b294c7889921af8131e263a140baba36588e10421a395b5f4cc WatchSource:0}: Error finding container 369ff544797e7b294c7889921af8131e263a140baba36588e10421a395b5f4cc: Status 404 returned error can't find the container with id 369ff544797e7b294c7889921af8131e263a140baba36588e10421a395b5f4cc Jan 28 18:42:13 crc kubenswrapper[4985]: I0128 18:42:13.066552 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:13 crc kubenswrapper[4985]: W0128 18:42:13.157145 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod851ea22a_e43d_4d11_911a_3ec541e6012c.slice/crio-da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e WatchSource:0}: Error finding container da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e: Status 404 returned error can't find the container with id da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e Jan 28 18:42:13 crc kubenswrapper[4985]: I0128 18:42:13.165759 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:13 crc kubenswrapper[4985]: I0128 18:42:13.170071 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34d82dad-dc98-4c0f-90c2-0b25f7d73c01","Type":"ContainerStarted","Data":"369ff544797e7b294c7889921af8131e263a140baba36588e10421a395b5f4cc"} Jan 28 18:42:13 crc kubenswrapper[4985]: I0128 18:42:13.176479 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"249a0e05-d210-402f-b7f8-2caf153346d8","Type":"ContainerStarted","Data":"280dd66feb159a68665caed63df71059c278506556427c060145287e1aedd726"} Jan 28 18:42:14 crc kubenswrapper[4985]: I0128 18:42:14.193085 4985 generic.go:334] "Generic (PLEG): container finished" podID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerID="eb06142e49a896d0f59b1509119df8e1b80f5b08d70235e7d7d845632e5598ca" exitCode=0 Jan 28 18:42:14 crc kubenswrapper[4985]: I0128 18:42:14.193176 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerDied","Data":"eb06142e49a896d0f59b1509119df8e1b80f5b08d70235e7d7d845632e5598ca"} Jan 28 18:42:14 crc kubenswrapper[4985]: I0128 18:42:14.194941 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerStarted","Data":"da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e"} Jan 28 18:42:15 crc kubenswrapper[4985]: I0128 18:42:15.207548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34d82dad-dc98-4c0f-90c2-0b25f7d73c01","Type":"ContainerStarted","Data":"e1d8d938a013e14e34718ea005c62adcdafbd122068babd1c11dc5a7c1422bf2"} Jan 28 18:42:15 crc kubenswrapper[4985]: I0128 18:42:15.210063 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"249a0e05-d210-402f-b7f8-2caf153346d8","Type":"ContainerStarted","Data":"0d9684f3d4336ae71b1f9fdea81d833a3ce461b76f547f6c936c89097d189168"} Jan 28 18:42:16 crc kubenswrapper[4985]: I0128 18:42:16.222483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerStarted","Data":"b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1"} Jan 28 18:42:16 crc kubenswrapper[4985]: I0128 18:42:16.222574 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:16 crc kubenswrapper[4985]: I0128 18:42:16.224166 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"93bb25f622215a35e032733b4664c5f7e5c37e8b8a11287fecbd4b3f644fd667"} Jan 28 18:42:16 crc kubenswrapper[4985]: I0128 18:42:16.244092 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" podStartSLOduration=7.244055308 podStartE2EDuration="7.244055308s" podCreationTimestamp="2026-01-28 18:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:42:16.241874967 +0000 UTC m=+1747.068437798" watchObservedRunningTime="2026-01-28 18:42:16.244055308 +0000 UTC m=+1747.070618129" Jan 28 18:42:17 crc kubenswrapper[4985]: I0128 18:42:17.237038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"5a17d16c268530c17cf1806dfcce5123026714ba2b437c71a364b66d574ea617"} Jan 28 18:42:19 crc kubenswrapper[4985]: E0128 18:42:19.128528 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" Jan 28 18:42:19 crc kubenswrapper[4985]: E0128 18:42:19.271699 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" Jan 28 
18:42:19 crc kubenswrapper[4985]: I0128 18:42:19.280747 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:42:19 crc kubenswrapper[4985]: I0128 18:42:19.281001 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"635d9dd27d70f1ccd27643b26e2e470fccf963c850c9c5557eaab5edb814ab6d"} Jan 28 18:42:20 crc kubenswrapper[4985]: E0128 18:42:20.283347 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" Jan 28 18:42:22 crc kubenswrapper[4985]: I0128 18:42:22.265104 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:42:22 crc kubenswrapper[4985]: E0128 18:42:22.265439 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:23 crc kubenswrapper[4985]: I0128 18:42:23.333989 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r7ml7" event={"ID":"627220be-fa5f-49a6-9c9e-b3ae5e49afec","Type":"ContainerStarted","Data":"48668effb10b8c0dfeaba93e4a156675d4c8985321775751a1f4f96f69975324"} Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.683306 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.719587 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-r7ml7" podStartSLOduration=4.079966297 podStartE2EDuration="41.719553989s" podCreationTimestamp="2026-01-28 18:41:43 +0000 UTC" firstStartedPulling="2026-01-28 18:41:44.81789176 +0000 UTC m=+1715.644454581" lastFinishedPulling="2026-01-28 18:42:22.457479422 +0000 UTC m=+1753.284042273" observedRunningTime="2026-01-28 18:42:23.349703037 +0000 UTC m=+1754.176265888" watchObservedRunningTime="2026-01-28 18:42:24.719553989 +0000 UTC m=+1755.546116860" Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.756939 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.757212 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" containerID="cri-o://8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419" gracePeriod=10 Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.962962 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-jqtwd"] Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.971668 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.982375 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-jqtwd"] Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085683 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085711 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085966 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.086523 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtrfd\" (UniqueName: \"kubernetes.io/projected/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-kube-api-access-qtrfd\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.086700 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-config\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.157177 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.1:5353: connect: connection refused" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qtrfd\" (UniqueName: \"kubernetes.io/projected/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-kube-api-access-qtrfd\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189464 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-config\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189620 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189657 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190834 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190835 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190874 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190879 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-config\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190947 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.210755 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtrfd\" (UniqueName: \"kubernetes.io/projected/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-kube-api-access-qtrfd\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.358423 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.358455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerDied","Data":"8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419"} Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.358420 4985 generic.go:334] "Generic (PLEG): container finished" podID="f33e23a8-5c59-41b1-9afe-00977f966724" containerID="8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419" exitCode=0 Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.939994 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-jqtwd"] Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.206179 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323360 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323448 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323545 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323681 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323725 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.361718 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w" (OuterVolumeSpecName: "kube-api-access-qz55w") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "kube-api-access-qz55w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.425488 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" event={"ID":"63ee6cb7-f768-47d8-a266-e1e6ca6926ea","Type":"ContainerStarted","Data":"985432ad861af76eae71821d9a1f34274f7a37efd03e3e7cfd07d428e40635ab"} Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.452657 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.460755 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerDied","Data":"8a81f5a6bc9aeb4779fe5ba3167c9da81f9d6b2cee2d0a3316b0a2d07b8f7a9e"} Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.460810 4985 scope.go:117] "RemoveContainer" containerID="8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.466451 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.500271 4985 scope.go:117] "RemoveContainer" containerID="fd29c92499411247c46e32f0f3619427bf7f15dbc9ff2205fbac7905d817aa90" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.511242 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.531824 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.542544 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.560321 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.560350 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.560359 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.563681 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.590174 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config" (OuterVolumeSpecName: "config") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.662933 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.662966 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.805976 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.816453 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:42:27 crc kubenswrapper[4985]: I0128 18:42:27.278935 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" path="/var/lib/kubelet/pods/f33e23a8-5c59-41b1-9afe-00977f966724/volumes" Jan 28 18:42:27 crc kubenswrapper[4985]: I0128 18:42:27.475607 4985 generic.go:334] "Generic (PLEG): container finished" podID="63ee6cb7-f768-47d8-a266-e1e6ca6926ea" containerID="53a1fab10c84910b7dae65cca8e794fd03ee543959c485919cd13d2287280a4a" exitCode=0 Jan 28 18:42:27 crc kubenswrapper[4985]: I0128 18:42:27.475694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" event={"ID":"63ee6cb7-f768-47d8-a266-e1e6ca6926ea","Type":"ContainerDied","Data":"53a1fab10c84910b7dae65cca8e794fd03ee543959c485919cd13d2287280a4a"} Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.491457 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" 
event={"ID":"63ee6cb7-f768-47d8-a266-e1e6ca6926ea","Type":"ContainerStarted","Data":"ef740412a8710735ab232783b3480fa853b94d1701dc6a4338aa95194f876a1e"} Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.491797 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.494467 4985 generic.go:334] "Generic (PLEG): container finished" podID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" containerID="48668effb10b8c0dfeaba93e4a156675d4c8985321775751a1f4f96f69975324" exitCode=0 Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.494569 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r7ml7" event={"ID":"627220be-fa5f-49a6-9c9e-b3ae5e49afec","Type":"ContainerDied","Data":"48668effb10b8c0dfeaba93e4a156675d4c8985321775751a1f4f96f69975324"} Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.523746 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" podStartSLOduration=4.523724261 podStartE2EDuration="4.523724261s" podCreationTimestamp="2026-01-28 18:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:42:28.512885145 +0000 UTC m=+1759.339447966" watchObservedRunningTime="2026-01-28 18:42:28.523724261 +0000 UTC m=+1759.350287092" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.013209 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r7ml7" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.139049 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") pod \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.141449 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") pod \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.141703 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") pod \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.186463 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx" (OuterVolumeSpecName: "kube-api-access-r9vtx") pod "627220be-fa5f-49a6-9c9e-b3ae5e49afec" (UID: "627220be-fa5f-49a6-9c9e-b3ae5e49afec"). InnerVolumeSpecName "kube-api-access-r9vtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.244371 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.249718 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "627220be-fa5f-49a6-9c9e-b3ae5e49afec" (UID: "627220be-fa5f-49a6-9c9e-b3ae5e49afec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.287135 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data" (OuterVolumeSpecName: "config-data") pod "627220be-fa5f-49a6-9c9e-b3ae5e49afec" (UID: "627220be-fa5f-49a6-9c9e-b3ae5e49afec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.348222 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.348339 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.516168 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r7ml7" event={"ID":"627220be-fa5f-49a6-9c9e-b3ae5e49afec","Type":"ContainerDied","Data":"319bf1dcb8102c51957853cf08d45a01f4387e66993d72cad23092e9e3dddb4f"} Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.516488 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319bf1dcb8102c51957853cf08d45a01f4387e66993d72cad23092e9e3dddb4f" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.516205 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-r7ml7" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.461135 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5df4f6c8f9-fvvqb"] Jan 28 18:42:31 crc kubenswrapper[4985]: E0128 18:42:31.462715 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" containerName="heat-db-sync" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.462805 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" containerName="heat-db-sync" Jan 28 18:42:31 crc kubenswrapper[4985]: E0128 18:42:31.462892 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.462982 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" Jan 28 18:42:31 crc kubenswrapper[4985]: E0128 18:42:31.463083 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="init" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.463138 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="init" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.463427 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.463544 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" containerName="heat-db-sync" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.464569 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.474770 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.474865 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data-custom\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.474984 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-combined-ca-bundle\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.475000 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d2gw\" (UniqueName: \"kubernetes.io/projected/45d84233-dc44-4b3c-8aaa-f08ab50c0512-kube-api-access-4d2gw\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.481640 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5df4f6c8f9-fvvqb"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.571286 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-9d696c4dd-qgm9g"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.574843 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588429 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-combined-ca-bundle\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588502 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthlv\" (UniqueName: \"kubernetes.io/projected/f91275ab-50ad-4d69-953f-764ccd276927-kube-api-access-bthlv\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588534 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data-custom\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588719 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data-custom\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.592360 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-9d696c4dd-qgm9g"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.594679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-internal-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598367 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-combined-ca-bundle\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598800 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d2gw\" (UniqueName: \"kubernetes.io/projected/45d84233-dc44-4b3c-8aaa-f08ab50c0512-kube-api-access-4d2gw\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598860 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-public-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.604056 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-combined-ca-bundle\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.604685 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data-custom\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.607825 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-76b7548687-cmjrr"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.614535 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.635942 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-76b7548687-cmjrr"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.654866 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d2gw\" (UniqueName: \"kubernetes.io/projected/45d84233-dc44-4b3c-8aaa-f08ab50c0512-kube-api-access-4d2gw\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.701603 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-combined-ca-bundle\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.701675 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-internal-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.701732 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-internal-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702167 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-public-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702664 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-public-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702719 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-hw82v\" (UniqueName: \"kubernetes.io/projected/c761ae73-94d1-46be-afe6-1232e2c589ff-kube-api-access-hw82v\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702812 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-combined-ca-bundle\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702890 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data-custom\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702938 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bthlv\" (UniqueName: \"kubernetes.io/projected/f91275ab-50ad-4d69-953f-764ccd276927-kube-api-access-bthlv\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.703021 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data-custom\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.706586 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-combined-ca-bundle\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.706641 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data-custom\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.706732 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-internal-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.707935 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-public-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.709077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.720125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bthlv\" (UniqueName: \"kubernetes.io/projected/f91275ab-50ad-4d69-953f-764ccd276927-kube-api-access-bthlv\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.786947 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.795878 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805113 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-public-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805183 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw82v\" (UniqueName: \"kubernetes.io/projected/c761ae73-94d1-46be-afe6-1232e2c589ff-kube-api-access-hw82v\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data-custom\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805277 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-combined-ca-bundle\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-internal-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805341 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.810046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data-custom\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.810108 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-internal-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.811402 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-public-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.813183 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-combined-ca-bundle\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.822182 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.833543 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw82v\" (UniqueName: \"kubernetes.io/projected/c761ae73-94d1-46be-afe6-1232e2c589ff-kube-api-access-hw82v\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.943371 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.210193 4985 scope.go:117] "RemoveContainer" containerID="00c5bac74e2813b5c78c4d3d883b158530767718be83285d64f4742a35e64806" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.249605 4985 scope.go:117] "RemoveContainer" containerID="d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.276195 4985 scope.go:117] "RemoveContainer" containerID="e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.314669 4985 scope.go:117] "RemoveContainer" containerID="2a94f1b22150bff413a35eb8a3eed5745a2369fd30defeeb03ec8e8bb54d93e7" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.372632 4985 scope.go:117] "RemoveContainer" containerID="e79b0c26c13e421f90b1e346a7a6ed37fdf036d779d67dcae2b50acce53ce0c6" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.376560 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-9d696c4dd-qgm9g"] Jan 28 18:42:32 crc kubenswrapper[4985]: W0128 18:42:32.383896 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf91275ab_50ad_4d69_953f_764ccd276927.slice/crio-1e84d2fdfda9eb21570c20068d0645bc7c30a765bde3ef192c7c127a0c127446 WatchSource:0}: Error finding container 1e84d2fdfda9eb21570c20068d0645bc7c30a765bde3ef192c7c127a0c127446: Status 404 returned error can't find the container with id 1e84d2fdfda9eb21570c20068d0645bc7c30a765bde3ef192c7c127a0c127446 Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.482269 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5df4f6c8f9-fvvqb"] Jan 28 18:42:32 crc kubenswrapper[4985]: W0128 18:42:32.491409 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45d84233_dc44_4b3c_8aaa_f08ab50c0512.slice/crio-f51930feb0bfbbeb832121e9e4781216b8bbecb150c7970083fc5b65973beb69 WatchSource:0}: Error finding container f51930feb0bfbbeb832121e9e4781216b8bbecb150c7970083fc5b65973beb69: Status 404 returned error can't find the container with id f51930feb0bfbbeb832121e9e4781216b8bbecb150c7970083fc5b65973beb69 Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.577529 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" event={"ID":"45d84233-dc44-4b3c-8aaa-f08ab50c0512","Type":"ContainerStarted","Data":"f51930feb0bfbbeb832121e9e4781216b8bbecb150c7970083fc5b65973beb69"} Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.580443 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-9d696c4dd-qgm9g" event={"ID":"f91275ab-50ad-4d69-953f-764ccd276927","Type":"ContainerStarted","Data":"1e84d2fdfda9eb21570c20068d0645bc7c30a765bde3ef192c7c127a0c127446"} Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.616995 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-76b7548687-cmjrr"] Jan 28 18:42:32 crc kubenswrapper[4985]: W0128 18:42:32.640870 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc761ae73_94d1_46be_afe6_1232e2c589ff.slice/crio-4b3bf40734b089e34a337426f34fc284909961f1818e329a90c896087898df64 WatchSource:0}: Error finding container 
4b3bf40734b089e34a337426f34fc284909961f1818e329a90c896087898df64: Status 404 returned error can't find the container with id 4b3bf40734b089e34a337426f34fc284909961f1818e329a90c896087898df64 Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.432180 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.598555 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" event={"ID":"45d84233-dc44-4b3c-8aaa-f08ab50c0512","Type":"ContainerStarted","Data":"16d7bbbf380aa65bd61b4ca60ba79649324b3433bb594ef93b14cb608ada2e9e"} Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.598623 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.601280 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76b7548687-cmjrr" event={"ID":"c761ae73-94d1-46be-afe6-1232e2c589ff","Type":"ContainerStarted","Data":"4b3bf40734b089e34a337426f34fc284909961f1818e329a90c896087898df64"} Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.619016 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" podStartSLOduration=2.61899776 podStartE2EDuration="2.61899776s" podCreationTimestamp="2026-01-28 18:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:42:33.617532909 +0000 UTC m=+1764.444095730" watchObservedRunningTime="2026-01-28 18:42:33.61899776 +0000 UTC m=+1764.445560581" Jan 28 18:42:35 crc kubenswrapper[4985]: I0128 18:42:35.263866 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:42:35 crc kubenswrapper[4985]: E0128 18:42:35.264653 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:35 crc kubenswrapper[4985]: I0128 18:42:35.360452 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:35 crc kubenswrapper[4985]: I0128 18:42:35.429724 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:35 crc kubenswrapper[4985]: I0128 18:42:35.430020 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" containerID="cri-o://b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1" gracePeriod=10 Jan 28 18:42:36 crc kubenswrapper[4985]: I0128 18:42:36.638358 4985 generic.go:334] "Generic (PLEG): container finished" podID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerID="b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1" exitCode=0 Jan 28 18:42:36 crc kubenswrapper[4985]: I0128 18:42:36.638438 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" 
event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerDied","Data":"b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1"} Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.835498 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941005 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941433 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941558 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941596 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941714 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941786 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941823 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.946617 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc" (OuterVolumeSpecName: "kube-api-access-tdqmc") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "kube-api-access-tdqmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.006330 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.012893 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.014433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config" (OuterVolumeSpecName: "config") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.016462 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.024797 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.028946 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044868 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044908 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044919 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044928 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044937 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044944 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044954 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.693136 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerDied","Data":"da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e"} Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.693190 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.693533 4985 scope.go:117] "RemoveContainer" containerID="b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.728521 4985 scope.go:117] "RemoveContainer" containerID="eb06142e49a896d0f59b1509119df8e1b80f5b08d70235e7d7d845632e5598ca" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.904951 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.917718 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.277959 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" path="/var/lib/kubelet/pods/851ea22a-e43d-4d11-911a-3ec541e6012c/volumes" Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.710325 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b"} Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.717897 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-9d696c4dd-qgm9g" event={"ID":"f91275ab-50ad-4d69-953f-764ccd276927","Type":"ContainerStarted","Data":"6203296a26a2c0a12ed531e57f672d48f72672c1daf4b6cc8e1eddd5624419f3"} Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.718945 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.721046 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76b7548687-cmjrr" event={"ID":"c761ae73-94d1-46be-afe6-1232e2c589ff","Type":"ContainerStarted","Data":"ad10a5387e49bec4b95c22f76fa4f6f5cc81171c5d425cf4b816d1158ff80871"} Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.721771 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.753015 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.16369897 podStartE2EDuration="52.752993057s" podCreationTimestamp="2026-01-28 18:41:49 +0000 UTC" firstStartedPulling="2026-01-28 18:41:50.890097276 +0000 UTC m=+1721.716660097" lastFinishedPulling="2026-01-28 18:42:40.479391363 +0000 UTC m=+1771.305954184" observedRunningTime="2026-01-28 18:42:41.737362836 +0000 UTC m=+1772.563925657" watchObservedRunningTime="2026-01-28 18:42:41.752993057 +0000 UTC m=+1772.579555878" Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.789384 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-76b7548687-cmjrr" podStartSLOduration=2.975901329 podStartE2EDuration="10.789364874s" podCreationTimestamp="2026-01-28 18:42:31 +0000 UTC" firstStartedPulling="2026-01-28 18:42:32.660457914 +0000 UTC m=+1763.487020725" lastFinishedPulling="2026-01-28 18:42:40.473921449 +0000 UTC m=+1771.300484270" observedRunningTime="2026-01-28 18:42:41.777003045 +0000 UTC m=+1772.603565876" watchObservedRunningTime="2026-01-28 18:42:41.789364874 +0000 UTC m=+1772.615927695" Jan 28 18:42:41 crc 
kubenswrapper[4985]: I0128 18:42:41.832424 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-9d696c4dd-qgm9g" podStartSLOduration=2.744886996 podStartE2EDuration="10.83240351s" podCreationTimestamp="2026-01-28 18:42:31 +0000 UTC" firstStartedPulling="2026-01-28 18:42:32.386614511 +0000 UTC m=+1763.213177332" lastFinishedPulling="2026-01-28 18:42:40.474131025 +0000 UTC m=+1771.300693846" observedRunningTime="2026-01-28 18:42:41.808897526 +0000 UTC m=+1772.635460347" watchObservedRunningTime="2026-01-28 18:42:41.83240351 +0000 UTC m=+1772.658966331" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.682290 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.14:5353: i/o timeout" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.860362 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk"] Jan 28 18:42:44 crc kubenswrapper[4985]: E0128 18:42:44.860829 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.860847 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" Jan 28 18:42:44 crc kubenswrapper[4985]: E0128 18:42:44.860864 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="init" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.860870 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="init" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.861115 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.862338 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.864207 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.864476 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.864718 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.865151 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.880410 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk"] Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.991537 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.991820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.991925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.992076 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6897\" (UniqueName: \"kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.094046 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.094113 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.094144 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.094221 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6897\" (UniqueName: \"kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.124425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.125717 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.126151 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.128845 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6897\" (UniqueName: \"kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.189107 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:46 crc kubenswrapper[4985]: I0128 18:42:46.264491 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:42:46 crc kubenswrapper[4985]: E0128 18:42:46.265496 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:46 crc kubenswrapper[4985]: I0128 18:42:46.876805 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk"] Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.799530 4985 generic.go:334] "Generic (PLEG): container finished" podID="249a0e05-d210-402f-b7f8-2caf153346d8" containerID="0d9684f3d4336ae71b1f9fdea81d833a3ce461b76f547f6c936c89097d189168" exitCode=0 Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.799608 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"249a0e05-d210-402f-b7f8-2caf153346d8","Type":"ContainerDied","Data":"0d9684f3d4336ae71b1f9fdea81d833a3ce461b76f547f6c936c89097d189168"} Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.802441 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" event={"ID":"7a5d3484-2192-44a6-b632-5a683af945d6","Type":"ContainerStarted","Data":"7c7a4afd6d6cdbdaa13f82b8cf1f686b4e15c7a50303b642026bcbf65746941e"} Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.805343 4985 generic.go:334] "Generic (PLEG): container finished" podID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerID="e1d8d938a013e14e34718ea005c62adcdafbd122068babd1c11dc5a7c1422bf2" exitCode=0 Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.805383 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34d82dad-dc98-4c0f-90c2-0b25f7d73c01","Type":"ContainerDied","Data":"e1d8d938a013e14e34718ea005c62adcdafbd122068babd1c11dc5a7c1422bf2"} Jan 28 18:42:49 crc kubenswrapper[4985]: I0128 18:42:49.839001 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34d82dad-dc98-4c0f-90c2-0b25f7d73c01","Type":"ContainerStarted","Data":"070f57a18fdf2335b2c740c37fb18af687ed8b76af622c39d8ddd22e8fd2e739"} Jan 28 18:42:49 crc kubenswrapper[4985]: I0128 18:42:49.843023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"249a0e05-d210-402f-b7f8-2caf153346d8","Type":"ContainerStarted","Data":"4d6bbe15fc0df126779e519f528cf5aa83fcff2224b5d45454ef6fbcd9ad0297"} Jan 28 18:42:50 crc kubenswrapper[4985]: I0128 18:42:50.853892 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:50 crc kubenswrapper[4985]: I0128 18:42:50.854748 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 28 18:42:50 crc kubenswrapper[4985]: I0128 18:42:50.877793 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
Jan 28 18:42:50 crc kubenswrapper[4985]: I0128 18:42:50.910878 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=48.910861146 podStartE2EDuration="48.910861146s" podCreationTimestamp="2026-01-28 18:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:42:50.898847717 +0000 UTC m=+1781.725410548" watchObservedRunningTime="2026-01-28 18:42:50.910861146 +0000 UTC m=+1781.737423957"
Jan 28 18:42:51 crc kubenswrapper[4985]: I0128 18:42:51.949855 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5df4f6c8f9-fvvqb"
Jan 28 18:42:52 crc kubenswrapper[4985]: I0128 18:42:52.015164 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"]
Jan 28 18:42:52 crc kubenswrapper[4985]: I0128 18:42:52.015386 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-54bf646c6-b6zb2" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine" containerID="cri-o://c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" gracePeriod=60
Jan 28 18:42:56 crc kubenswrapper[4985]: I0128 18:42:56.811622 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-9d696c4dd-qgm9g" podUID="f91275ab-50ad-4d69-953f-764ccd276927" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.17:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:42:56 crc kubenswrapper[4985]: I0128 18:42:56.811660 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-9d696c4dd-qgm9g" podUID="f91275ab-50ad-4d69-953f-764ccd276927" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.17:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:42:56 crc kubenswrapper[4985]: I0128 18:42:56.957053 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-76b7548687-cmjrr" podUID="c761ae73-94d1-46be-afe6-1232e2c589ff" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.18:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:42:56 crc kubenswrapper[4985]: I0128 18:42:56.957525 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-76b7548687-cmjrr" podUID="c761ae73-94d1-46be-afe6-1232e2c589ff" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.18:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:42:57 crc kubenswrapper[4985]: I0128 18:42:57.264500 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"
Jan 28 18:42:57 crc kubenswrapper[4985]: E0128 18:42:57.265171 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:43:00 crc kubenswrapper[4985]: E0128 18:43:00.787016 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 28 18:43:00 crc kubenswrapper[4985]: E0128 18:43:00.789274 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 28 18:43:00 crc kubenswrapper[4985]: E0128 18:43:00.790773 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 28 18:43:00 crc kubenswrapper[4985]: E0128 18:43:00.790840 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-54bf646c6-b6zb2" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine"
Jan 28 18:43:00 crc kubenswrapper[4985]: I0128 18:43:00.987407 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-76b7548687-cmjrr"
Jan 28 18:43:00 crc kubenswrapper[4985]: I0128 18:43:00.989211 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-9d696c4dd-qgm9g"
Jan 28 18:43:01 crc kubenswrapper[4985]: I0128 18:43:01.088991 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"]
Jan 28 18:43:01 crc kubenswrapper[4985]: I0128 18:43:01.089318 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" containerID="cri-o://ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433" gracePeriod=60
Jan 28 18:43:01 crc kubenswrapper[4985]: I0128 18:43:01.109950 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"]
Jan 28 18:43:01 crc kubenswrapper[4985]: I0128 18:43:01.110399 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-78f74b8b49-ngj6j" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" containerID="cri-o://df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195" gracePeriod=60
Jan 28 18:43:02 crc kubenswrapper[4985]: I0128 18:43:02.512405 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.12:5671: connect: connection refused"
Jan 28 18:43:02 crc kubenswrapper[4985]: I0128 18:43:02.713220 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused"
Jan 28 18:43:04 crc kubenswrapper[4985]: I0128 18:43:04.773502 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.222:8000/healthcheck\": read tcp 10.217.0.2:59510->10.217.0.222:8000: read: connection reset by peer"
Jan 28 18:43:04 crc kubenswrapper[4985]: I0128 18:43:04.785900 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-78f74b8b49-ngj6j" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.221:8004/healthcheck\": read tcp 10.217.0.2:43948->10.217.0.221:8004: read: connection reset by peer"
Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.506315 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-hgpsv"]
Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.520394 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-hgpsv"]
Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.919077 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-6bqfv"]
Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.920883 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.924758 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.936303 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6bqfv"]
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.029628 4985 generic.go:334] "Generic (PLEG): container finished" podID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerID="ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433" exitCode=0
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.029717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" event={"ID":"f0c2a92a-343c-42fa-a740-8bb10701d271","Type":"ContainerDied","Data":"ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433"}
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.031692 4985 generic.go:334] "Generic (PLEG): container finished" podID="261340dd-15fd-43d9-8db3-3de095d8728a" containerID="df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195" exitCode=0
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.031734 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f74b8b49-ngj6j" event={"ID":"261340dd-15fd-43d9-8db3-3de095d8728a","Type":"ContainerDied","Data":"df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195"}
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.081121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.081295 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.081396 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.081460 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.184635 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.185006 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.185085 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.185139 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.199715 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.199829 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.200115 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.201620 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.294821 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:43:07 crc kubenswrapper[4985]: I0128 18:43:07.560737 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7decce21-e84c-4501-bf0d-ca01387c51ee" path="/var/lib/kubelet/pods/7decce21-e84c-4501-bf0d-ca01387c51ee/volumes"
Jan 28 18:43:08 crc kubenswrapper[4985]: I0128 18:43:08.385501 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-78f74b8b49-ngj6j" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.221:8004/healthcheck\": dial tcp 10.217.0.221:8004: connect: connection refused"
Jan 28 18:43:08 crc kubenswrapper[4985]: I0128 18:43:08.385567 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.222:8000/healthcheck\": dial tcp 10.217.0.222:8000: connect: connection refused"
Jan 28 18:43:10 crc kubenswrapper[4985]: I0128 18:43:10.085192 4985 generic.go:334] "Generic (PLEG): container finished" podID="a907310b-926c-4b8e-b3db-b8a43844891c" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" exitCode=0
Jan 28 18:43:10 crc kubenswrapper[4985]: I0128 18:43:10.085307 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54bf646c6-b6zb2" event={"ID":"a907310b-926c-4b8e-b3db-b8a43844891c","Type":"ContainerDied","Data":"c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321"}
Jan 28 18:43:10 crc kubenswrapper[4985]: I0128 18:43:10.265127 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"
Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.265885 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.785968 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321 is running failed: container process not found" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.786220 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321 is running failed: container process not found" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.788643 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321 is running failed: container process not found" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.788684 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-54bf646c6-b6zb2" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine"
Jan 28 18:43:12 crc kubenswrapper[4985]: I0128 18:43:12.502304 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.12:5671: connect: connection refused"
Jan 28 18:43:12 crc kubenswrapper[4985]: I0128 18:43:12.712425 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused"
Jan 28 18:43:13 crc kubenswrapper[4985]: I0128 18:43:13.385725 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-78f74b8b49-ngj6j" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.221:8004/healthcheck\": dial tcp 10.217.0.221:8004: connect: connection refused"
Jan 28 18:43:13 crc kubenswrapper[4985]: I0128 18:43:13.385853 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78f74b8b49-ngj6j"
Jan 28 18:43:13 crc kubenswrapper[4985]: I0128 18:43:13.386322 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.222:8000/healthcheck\": dial tcp 10.217.0.222:8000: connect: connection refused"
Jan 28 18:43:13 crc kubenswrapper[4985]: I0128 18:43:13.386543 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-db4c676cd-xbwzr"
Jan 28 18:43:16 crc kubenswrapper[4985]: E0128 18:43:16.755776 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Jan 28 18:43:16 crc kubenswrapper[4985]: E0128 18:43:16.756568 4985 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 28 18:43:16 crc kubenswrapper[4985]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Jan 28 18:43:16 crc kubenswrapper[4985]: - hosts: all
Jan 28 18:43:16 crc kubenswrapper[4985]: strategy: linear
Jan 28 18:43:16 crc kubenswrapper[4985]: tasks:
Jan 28 18:43:16 crc kubenswrapper[4985]: - name: Enable podified-repos
Jan 28 18:43:16 crc kubenswrapper[4985]: become: true
Jan 28 18:43:16 crc kubenswrapper[4985]: ansible.builtin.shell: |
Jan 28 18:43:16 crc kubenswrapper[4985]: set -euxo pipefail
Jan 28 18:43:16 crc kubenswrapper[4985]: pushd /var/tmp
Jan 28 18:43:16 crc kubenswrapper[4985]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Jan 28 18:43:16 crc kubenswrapper[4985]: pushd repo-setup-main
Jan 28 18:43:16 crc kubenswrapper[4985]: python3 -m venv ./venv
Jan 28 18:43:16 crc kubenswrapper[4985]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Jan 28 18:43:16 crc kubenswrapper[4985]: ./venv/bin/repo-setup current-podified -b antelope
Jan 28 18:43:16 crc kubenswrapper[4985]: popd
Jan 28 18:43:16 crc kubenswrapper[4985]: rm -rf repo-setup-main
Jan 28 18:43:16 crc kubenswrapper[4985]:
Jan 28 18:43:16 crc kubenswrapper[4985]:
Jan 28 18:43:16 crc kubenswrapper[4985]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Jan 28 18:43:16 crc kubenswrapper[4985]: edpm_override_hosts: openstack-edpm-ipam
Jan 28 18:43:16 crc kubenswrapper[4985]: edpm_service_type: repo-setup
Jan 28 18:43:16 crc kubenswrapper[4985]:
Jan 28 18:43:16 crc kubenswrapper[4985]:
Jan 28 18:43:16 crc kubenswrapper[4985]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6897,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk_openstack(7a5d3484-2192-44a6-b632-5a683af945d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk_openstack(7a5d3484-2192-44a6-b632-5a683af945d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 28 18:43:16 crc kubenswrapper[4985]: > logger="UnhandledError" Jan 28 18:43:16 crc kubenswrapper[4985]: E0128 18:43:16.758019 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" Jan 28 18:43:17 crc kubenswrapper[4985]: E0128 18:43:17.204588 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.522330 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6bqfv"] Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.665525 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.680612 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.707703 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820503 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820560 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820611 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820639 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820759 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") pod 
\"a907310b-926c-4b8e-b3db-b8a43844891c\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820871 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820897 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820929 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820985 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821004 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821107 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") pod \"a907310b-926c-4b8e-b3db-b8a43844891c\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821123 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") pod \"a907310b-926c-4b8e-b3db-b8a43844891c\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821146 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821170 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod 
\"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") pod \"a907310b-926c-4b8e-b3db-b8a43844891c\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.848018 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj" (OuterVolumeSpecName: "kube-api-access-7kccj") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "kube-api-access-7kccj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.850674 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.850736 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.852032 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z" (OuterVolumeSpecName: "kube-api-access-jnf9z") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "kube-api-access-jnf9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.853063 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd" (OuterVolumeSpecName: "kube-api-access-sxzqd") pod "a907310b-926c-4b8e-b3db-b8a43844891c" (UID: "a907310b-926c-4b8e-b3db-b8a43844891c"). InnerVolumeSpecName "kube-api-access-sxzqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.874534 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a907310b-926c-4b8e-b3db-b8a43844891c" (UID: "a907310b-926c-4b8e-b3db-b8a43844891c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.891222 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a907310b-926c-4b8e-b3db-b8a43844891c" (UID: "a907310b-926c-4b8e-b3db-b8a43844891c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.899448 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.922038 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923866 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923899 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923912 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923923 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923935 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923946 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923957 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923968 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923978 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.938531 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data" (OuterVolumeSpecName: "config-data") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.951119 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.960560 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data" (OuterVolumeSpecName: "config-data") pod "a907310b-926c-4b8e-b3db-b8a43844891c" (UID: "a907310b-926c-4b8e-b3db-b8a43844891c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.025478 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.025575 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026525 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026645 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data" (OuterVolumeSpecName: "config-data") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026688 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:18 crc kubenswrapper[4985]: W0128 18:43:18.026725 4985 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f0c2a92a-343c-42fa-a740-8bb10701d271/volumes/kubernetes.io~secret/combined-ca-bundle Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026734 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026737 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:18 crc kubenswrapper[4985]: W0128 18:43:18.026888 4985 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f0c2a92a-343c-42fa-a740-8bb10701d271/volumes/kubernetes.io~secret/internal-tls-certs Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026902 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027774 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027799 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027811 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027822 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027831 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027839 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027847 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.235554 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f74b8b49-ngj6j" event={"ID":"261340dd-15fd-43d9-8db3-3de095d8728a","Type":"ContainerDied","Data":"21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a"} Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.235846 4985 scope.go:117] "RemoveContainer" containerID="df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.235997 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.269483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" event={"ID":"f0c2a92a-343c-42fa-a740-8bb10701d271","Type":"ContainerDied","Data":"949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd"} Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.269548 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.286510 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6bqfv" event={"ID":"d276e0b0-f662-443c-a126-003ee44287c8","Type":"ContainerStarted","Data":"ecdfc8afa4f2b868f84dc5832f39a80a33774a8c5d26cccc6c2784958c84b2cf"} Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.306763 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.307031 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.307075 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54bf646c6-b6zb2" event={"ID":"a907310b-926c-4b8e-b3db-b8a43844891c","Type":"ContainerDied","Data":"c2cd5ecab7f62d49a442677c7f74b95e91134604fb9c330ec7bb5b250544e223"} Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.329853 4985 scope.go:117] "RemoveContainer" containerID="ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.344303 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.373315 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.374542 4985 scope.go:117] "RemoveContainer" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.422771 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.443691 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.456280 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:43:19 crc kubenswrapper[4985]: I0128 18:43:19.279776 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" path="/var/lib/kubelet/pods/261340dd-15fd-43d9-8db3-3de095d8728a/volumes" Jan 28 18:43:19 crc kubenswrapper[4985]: I0128 18:43:19.280688 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" path="/var/lib/kubelet/pods/a907310b-926c-4b8e-b3db-b8a43844891c/volumes" Jan 28 18:43:19 crc kubenswrapper[4985]: I0128 18:43:19.281265 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" path="/var/lib/kubelet/pods/f0c2a92a-343c-42fa-a740-8bb10701d271/volumes" Jan 28 18:43:22 crc kubenswrapper[4985]: I0128 18:43:22.502562 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.12:5671: connect: connection refused" Jan 28 18:43:22 crc kubenswrapper[4985]: I0128 18:43:22.712613 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused" Jan 28 18:43:23 crc kubenswrapper[4985]: I0128 18:43:23.267125 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:43:23 crc kubenswrapper[4985]: E0128 18:43:23.267540 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 28 18:43:23 crc kubenswrapper[4985]: E0128 18:43:23.267540 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:43:32 crc kubenswrapper[4985]: I0128 18:43:32.504394 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.12:5671: connect: connection refused"
Jan 28 18:43:32 crc kubenswrapper[4985]: I0128 18:43:32.712117 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused"
Jan 28 18:43:32 crc kubenswrapper[4985]: I0128 18:43:32.734657 4985 scope.go:117] "RemoveContainer" containerID="d7223a7a628a68fecc17a7f4ec70d47a10ad7c02ac73f8bb90091f9b898b7963"
Jan 28 18:43:34 crc kubenswrapper[4985]: I0128 18:43:34.264551 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"
Jan 28 18:43:34 crc kubenswrapper[4985]: E0128 18:43:34.265477 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:43:35 crc kubenswrapper[4985]: I0128 18:43:35.830498 4985 scope.go:117] "RemoveContainer" containerID="16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e"
Jan 28 18:43:35 crc kubenswrapper[4985]: I0128 18:43:35.929969 4985 scope.go:117] "RemoveContainer" containerID="66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98"
Jan 28 18:43:36 crc kubenswrapper[4985]: I0128 18:43:36.009140 4985 scope.go:117] "RemoveContainer" containerID="f090f667713f31e333608c60874aca9b174e0dc6eb4e52fb2779980ecf229992"
Jan 28 18:43:36 crc kubenswrapper[4985]: I0128 18:43:36.048997 4985 scope.go:117] "RemoveContainer" containerID="12e6aacaa8527f36ddf49eb87d558411736fa67a95ae92f557207b934aed3337"
Jan 28 18:43:36 crc kubenswrapper[4985]: I0128 18:43:36.163863 4985 scope.go:117] "RemoveContainer" containerID="9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95"
Jan 28 18:43:37 crc kubenswrapper[4985]: E0128 18:43:37.955407 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested"
Jan 28 18:43:37 crc kubenswrapper[4985]: E0128 18:43:37.955524 4985 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested"
Jan 28 18:43:37 crc kubenswrapper[4985]: E0128 18:43:37.955760 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:aodh-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested,Command:[/bin/bash],Args:[-c
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:AodhPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:AodhPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmkqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42402,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod aodh-db-sync-6bqfv_openstack(d276e0b0-f662-443c-a126-003ee44287c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:43:37 crc kubenswrapper[4985]: E0128 18:43:37.957612 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/aodh-db-sync-6bqfv" podUID="d276e0b0-f662-443c-a126-003ee44287c8" Jan 28 18:43:38 crc kubenswrapper[4985]: E0128 18:43:38.578081 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested\\\"\"" pod="openstack/aodh-db-sync-6bqfv" podUID="d276e0b0-f662-443c-a126-003ee44287c8" Jan 28 18:43:42 crc kubenswrapper[4985]: I0128 18:43:42.504159 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:43:42 crc kubenswrapper[4985]: I0128 18:43:42.712044 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused" Jan 28 18:43:42 crc kubenswrapper[4985]: I0128 18:43:42.851882 4985 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 18:43:43 crc kubenswrapper[4985]: I0128 18:43:43.648892 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" event={"ID":"7a5d3484-2192-44a6-b632-5a683af945d6","Type":"ContainerStarted","Data":"e803a48767e57173d8a437957c1d078418a2e9321f0bb9972b4c3e1e7fb17ef1"}
Jan 28 18:43:43 crc kubenswrapper[4985]: I0128 18:43:43.683281 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" podStartSLOduration=3.713064809 podStartE2EDuration="59.683221315s" podCreationTimestamp="2026-01-28 18:42:44 +0000 UTC" firstStartedPulling="2026-01-28 18:42:46.879393127 +0000 UTC m=+1777.705955938" lastFinishedPulling="2026-01-28 18:43:42.849549623 +0000 UTC m=+1833.676112444" observedRunningTime="2026-01-28 18:43:43.664181077 +0000 UTC m=+1834.490743898" watchObservedRunningTime="2026-01-28 18:43:43.683221315 +0000 UTC m=+1834.509784136"
Jan 28 18:43:49 crc kubenswrapper[4985]: I0128 18:43:49.264953 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"
Jan 28 18:43:49 crc kubenswrapper[4985]: E0128 18:43:49.265574 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:43:52 crc kubenswrapper[4985]: I0128 18:43:52.712453 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Jan 28 18:43:52 crc kubenswrapper[4985]: I0128 18:43:52.788339 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 28 18:43:54 crc kubenswrapper[4985]: I0128 18:43:54.798599 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 28 18:43:55 crc kubenswrapper[4985]: I0128 18:43:55.842220 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6bqfv" event={"ID":"d276e0b0-f662-443c-a126-003ee44287c8","Type":"ContainerStarted","Data":"7dec6fdf3bc8770aef28236161fb96819a55a36d37cd04df32abd054cd4e7c4d"}
Jan 28 18:43:55 crc kubenswrapper[4985]: I0128 18:43:55.866049 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-6bqfv" podStartSLOduration=13.634935864 podStartE2EDuration="50.86602383s" podCreationTimestamp="2026-01-28 18:43:05 +0000 UTC" firstStartedPulling="2026-01-28 18:43:17.564603861 +0000 UTC m=+1808.391166682" lastFinishedPulling="2026-01-28 18:43:54.795691827 +0000 UTC m=+1845.622254648" observedRunningTime="2026-01-28 18:43:55.85891706 +0000 UTC m=+1846.685479881" watchObservedRunningTime="2026-01-28 18:43:55.86602383 +0000 UTC m=+1846.692586671"
Jan 28 18:43:57 crc kubenswrapper[4985]: I0128 18:43:57.457066 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" containerID="cri-o://40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247" gracePeriod=604796
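[Annotation] The two pod_startup_latency_tracker.go:104 entries above fit a simple relation: podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling), i.e. the SLO figure excludes image-pull time. A quick self-contained check against the aodh-db-sync-6bqfv entry, with the wall-clock values copied from the log (the m=+... monotonic offsets dropped):

```go
// Verifies the arithmetic behind "Observed pod startup duration":
// e2e  = observedRunningTime - podCreationTimestamp   -> 50.86602383s
// slo  = e2e - (lastFinishedPulling - firstStartedPulling) -> 13.634935864s
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-28 18:43:05 +0000 UTC")
	firstPull := mustParse("2026-01-28 18:43:17.564603861 +0000 UTC")
	lastPull := mustParse("2026-01-28 18:43:54.795691827 +0000 UTC")
	observed := mustParse("2026-01-28 18:43:55.86602383 +0000 UTC")

	e2e := observed.Sub(created)         // the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // the logged podStartSLOduration
	fmt.Println(e2e, slo)
}
```

The repo-setup entry above obeys the same relation (59.683221315s minus a 55.97s pull window gives the logged 3.713064809).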
Jan 28 18:43:58 crc kubenswrapper[4985]: I0128 18:43:58.879236 4985 generic.go:334] "Generic (PLEG): container finished" podID="7a5d3484-2192-44a6-b632-5a683af945d6" containerID="e803a48767e57173d8a437957c1d078418a2e9321f0bb9972b4c3e1e7fb17ef1" exitCode=0
Jan 28 18:43:58 crc kubenswrapper[4985]: I0128 18:43:58.879323 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" event={"ID":"7a5d3484-2192-44a6-b632-5a683af945d6","Type":"ContainerDied","Data":"e803a48767e57173d8a437957c1d078418a2e9321f0bb9972b4c3e1e7fb17ef1"}
Jan 28 18:43:59 crc kubenswrapper[4985]: I0128 18:43:59.860792 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.124888 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk"
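[Annotation] The block that follows shows the volume reconciler's three-step teardown for the finished repo-setup pod: "operationExecutor.UnmountVolume started" (reconciler_common.go:159), then "UnmountVolume.TearDown succeeded" (operation_generator.go:803), then "Volume detached" (reconciler_common.go:293). A minimal sketch of that desired-vs-actual loop, with types and names that are illustrative only, not kubelet's:

```go
// Sketch of the unmount flow the reconciler entries trace: volumes still in
// the actual state but absent from the desired state get an UnmountVolume
// operation (TearDown), and only afterwards are reported as detached.
package main

import "fmt"

type volume struct{ pod, name string }

func reconcile(desired, actual map[volume]bool) {
	for v := range actual {
		if desired[v] {
			continue // still wanted; leave mounted
		}
		fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
		// ... TearDown: unmount and clean the per-pod volume dir ...
		delete(actual, v)
		fmt.Printf("Volume detached for volume %q\n", v.name)
	}
}

func main() {
	pod := "7a5d3484-2192-44a6-b632-5a683af945d6"
	actual := map[volume]bool{
		{pod, "inventory"}:                   true,
		{pod, "ssh-key-openstack-edpm-ipam"}: true,
	}
	reconcile(map[volume]bool{}, actual) // pod deleted: desired state is empty
}
```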
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.319480 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a5d3484-2192-44a6-b632-5a683af945d6" (UID: "7a5d3484-2192-44a6-b632-5a683af945d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.327016 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory" (OuterVolumeSpecName: "inventory") pod "7a5d3484-2192-44a6-b632-5a683af945d6" (UID: "7a5d3484-2192-44a6-b632-5a683af945d6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.386673 4985 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.386714 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.386724 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.386734 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6897\" (UniqueName: \"kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.936822 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.936965 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" event={"ID":"7a5d3484-2192-44a6-b632-5a683af945d6","Type":"ContainerDied","Data":"7c7a4afd6d6cdbdaa13f82b8cf1f686b4e15c7a50303b642026bcbf65746941e"} Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.937019 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c7a4afd6d6cdbdaa13f82b8cf1f686b4e15c7a50303b642026bcbf65746941e" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.941025 4985 generic.go:334] "Generic (PLEG): container finished" podID="d276e0b0-f662-443c-a126-003ee44287c8" containerID="7dec6fdf3bc8770aef28236161fb96819a55a36d37cd04df32abd054cd4e7c4d" exitCode=0 Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.941111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6bqfv" event={"ID":"d276e0b0-f662-443c-a126-003ee44287c8","Type":"ContainerDied","Data":"7dec6fdf3bc8770aef28236161fb96819a55a36d37cd04df32abd054cd4e7c4d"} Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.195502 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a5d3484_2192_44a6_b632_5a683af945d6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a5d3484_2192_44a6_b632_5a683af945d6.slice/crio-7c7a4afd6d6cdbdaa13f82b8cf1f686b4e15c7a50303b642026bcbf65746941e\": RecentStats: unable to find data in memory cache]" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.223715 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j"] Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.224456 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224481 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.224499 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224505 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.224531 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224537 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.224551 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224557 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" 
containerName="heat-engine" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224790 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224811 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224832 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224853 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.225934 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.229719 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.229756 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.229789 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.229724 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.251775 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j"] Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.265283 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.265850 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.313666 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.313848 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.313916 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.416337 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.416686 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.416983 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.424218 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.424471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.438020 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.549358 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.097433 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j"] Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.267164 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.441171 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") pod \"d276e0b0-f662-443c-a126-003ee44287c8\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.441384 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") pod \"d276e0b0-f662-443c-a126-003ee44287c8\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.441414 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") pod \"d276e0b0-f662-443c-a126-003ee44287c8\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.442211 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") pod \"d276e0b0-f662-443c-a126-003ee44287c8\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.447007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts" (OuterVolumeSpecName: "scripts") pod "d276e0b0-f662-443c-a126-003ee44287c8" (UID: "d276e0b0-f662-443c-a126-003ee44287c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.452098 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr" (OuterVolumeSpecName: "kube-api-access-fmkqr") pod "d276e0b0-f662-443c-a126-003ee44287c8" (UID: "d276e0b0-f662-443c-a126-003ee44287c8"). InnerVolumeSpecName "kube-api-access-fmkqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.475281 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d276e0b0-f662-443c-a126-003ee44287c8" (UID: "d276e0b0-f662-443c-a126-003ee44287c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.484122 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data" (OuterVolumeSpecName: "config-data") pod "d276e0b0-f662-443c-a126-003ee44287c8" (UID: "d276e0b0-f662-443c-a126-003ee44287c8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.545531 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.545572 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.545585 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.545603 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.963646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" event={"ID":"3b94af3f-603c-4a3e-966e-7a4bfbc78178","Type":"ContainerStarted","Data":"ecdced9e50dc70f2eb69194df14784349ed0af2d4baa3abe5de9f65f07e14e66"} Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.963691 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" event={"ID":"3b94af3f-603c-4a3e-966e-7a4bfbc78178","Type":"ContainerStarted","Data":"99e90286bb93168beee09d961f200ea37eff2b69082fa47f4c51a1f62dd08a43"} Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.966977 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6bqfv" event={"ID":"d276e0b0-f662-443c-a126-003ee44287c8","Type":"ContainerDied","Data":"ecdfc8afa4f2b868f84dc5832f39a80a33774a8c5d26cccc6c2784958c84b2cf"} Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.967015 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecdfc8afa4f2b868f84dc5832f39a80a33774a8c5d26cccc6c2784958c84b2cf" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.967018 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:44:04 crc kubenswrapper[4985]: I0128 18:44:04.001278 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" podStartSLOduration=1.520143616 podStartE2EDuration="2.001209411s" podCreationTimestamp="2026-01-28 18:44:02 +0000 UTC" firstStartedPulling="2026-01-28 18:44:03.098750777 +0000 UTC m=+1853.925313598" lastFinishedPulling="2026-01-28 18:44:03.579816572 +0000 UTC m=+1854.406379393" observedRunningTime="2026-01-28 18:44:03.986926078 +0000 UTC m=+1854.813488899" watchObservedRunningTime="2026-01-28 18:44:04.001209411 +0000 UTC m=+1854.827772232" Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.825577 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.826211 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-api" containerID="cri-o://352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7" gracePeriod=30 Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.826382 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-evaluator" containerID="cri-o://a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895" gracePeriod=30 Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.826473 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-notifier" containerID="cri-o://0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65" gracePeriod=30 Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.826682 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-listener" containerID="cri-o://3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731" gracePeriod=30 Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.025321 4985 generic.go:334] "Generic (PLEG): container finished" podID="313d3857-140a-4a66-8329-12453fc8dd4c" containerID="40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247" exitCode=0 Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.025409 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerDied","Data":"40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247"} Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.029031 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerID="a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895" exitCode=0 Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.029054 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerID="352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7" exitCode=0 Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.029121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895"} Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.029167 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7"} Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.032456 4985 generic.go:334] "Generic (PLEG): container finished" podID="3b94af3f-603c-4a3e-966e-7a4bfbc78178" containerID="ecdced9e50dc70f2eb69194df14784349ed0af2d4baa3abe5de9f65f07e14e66" exitCode=0 Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.032491 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" event={"ID":"3b94af3f-603c-4a3e-966e-7a4bfbc78178","Type":"ContainerDied","Data":"ecdced9e50dc70f2eb69194df14784349ed0af2d4baa3abe5de9f65f07e14e66"} Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.297681 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459608 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459786 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459851 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459929 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459994 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.460901 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.460965 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.460998 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.461030 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.461114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.461309 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.462800 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.463393 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.463563 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.463605 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.466060 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info" (OuterVolumeSpecName: "pod-info") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.471378 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.488211 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.488330 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc" (OuterVolumeSpecName: "kube-api-access-7t6vc") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "kube-api-access-7t6vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.499982 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832" (OuterVolumeSpecName: "persistence") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "pvc-4b595522-7516-4d20-a11a-582dd7716832". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.501428 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data" (OuterVolumeSpecName: "config-data") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.557883 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf" (OuterVolumeSpecName: "server-conf") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566086 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566110 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566120 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566129 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566137 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566166 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") on node \"crc\" " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566177 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566186 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.607998 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.608193 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4b595522-7516-4d20-a11a-582dd7716832" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832") on node "crc" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.615613 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.668701 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.668751 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.044119 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerDied","Data":"17211bf5e9b8b8c383ea958cf8ed251d1d40c28a9c6c3e8e814a8d59072a3363"} Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.044163 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.044184 4985 scope.go:117] "RemoveContainer" containerID="40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.097380 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.104663 4985 scope.go:117] "RemoveContainer" containerID="4546478e3b48ee65a1e4f5b248d4caed2739a0baae4f2cf1c67d5da021b79ce7" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.120286 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.146057 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:44:08 crc kubenswrapper[4985]: E0128 18:44:08.147092 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147118 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" Jan 28 18:44:08 crc kubenswrapper[4985]: E0128 18:44:08.147142 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="setup-container" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147151 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="setup-container" Jan 28 18:44:08 crc kubenswrapper[4985]: E0128 18:44:08.147196 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d276e0b0-f662-443c-a126-003ee44287c8" containerName="aodh-db-sync" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147205 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d276e0b0-f662-443c-a126-003ee44287c8" containerName="aodh-db-sync" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147567 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147599 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d276e0b0-f662-443c-a126-003ee44287c8" containerName="aodh-db-sync" Jan 28 18:44:08 crc 
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.149451 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.189343 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284359 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf27z\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-kube-api-access-zf27z\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284423 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae555e00-c2df-4fce-af07-a91133f8767d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284563 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284620 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284692 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.287486 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.287626 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.287742 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.287924 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae555e00-c2df-4fce-af07-a91133f8767d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.288128 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-config-data\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.291625 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395664 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395735 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395827 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395941 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.396027 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae555e00-c2df-4fce-af07-a91133f8767d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.396105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-config-data\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.396996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397032 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-config-data\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf27z\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-kube-api-access-zf27z\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397361 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae555e00-c2df-4fce-af07-a91133f8767d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397493 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.398489 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.404930 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.404954 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae555e00-c2df-4fce-af07-a91133f8767d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.406046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.407632 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.408267 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.408291 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce250563889cf210f76b1961aa7444b8cbe0d3f306db896236b924f9bdc2ed03/globalmount\"" pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.411925 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.417971 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf27z\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-kube-api-access-zf27z\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.421995 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae555e00-c2df-4fce-af07-a91133f8767d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.537551 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.544502 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.701088 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j"
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.815265 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") pod \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.815794 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.815829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") pod \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.822927 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps" (OuterVolumeSpecName: "kube-api-access-5djps") pod "3b94af3f-603c-4a3e-966e-7a4bfbc78178" (UID: "3b94af3f-603c-4a3e-966e-7a4bfbc78178"). InnerVolumeSpecName "kube-api-access-5djps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:08 crc kubenswrapper[4985]: E0128 18:44:08.849483 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam podName:3b94af3f-603c-4a3e-966e-7a4bfbc78178 nodeName:}" failed. No retries permitted until 2026-01-28 18:44:09.349459576 +0000 UTC m=+1860.176022397 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam") pod "3b94af3f-603c-4a3e-966e-7a4bfbc78178" (UID: "3b94af3f-603c-4a3e-966e-7a4bfbc78178") : error deleting /var/lib/kubelet/pods/3b94af3f-603c-4a3e-966e-7a4bfbc78178/volume-subpaths: remove /var/lib/kubelet/pods/3b94af3f-603c-4a3e-966e-7a4bfbc78178/volume-subpaths: no such file or directory Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.853918 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory" (OuterVolumeSpecName: "inventory") pod "3b94af3f-603c-4a3e-966e-7a4bfbc78178" (UID: "3b94af3f-603c-4a3e-966e-7a4bfbc78178"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.919469 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.919514 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.046667 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.059673 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.059843 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" event={"ID":"3b94af3f-603c-4a3e-966e-7a4bfbc78178","Type":"ContainerDied","Data":"99e90286bb93168beee09d961f200ea37eff2b69082fa47f4c51a1f62dd08a43"} Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.059886 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e90286bb93168beee09d961f200ea37eff2b69082fa47f4c51a1f62dd08a43" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.127710 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"] Jan 28 18:44:09 crc kubenswrapper[4985]: E0128 18:44:09.128186 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b94af3f-603c-4a3e-966e-7a4bfbc78178" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.128203 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b94af3f-603c-4a3e-966e-7a4bfbc78178" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.128467 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b94af3f-603c-4a3e-966e-7a4bfbc78178" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.129306 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.170483 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"] Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.227336 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.227382 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.227641 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.227747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.277150 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" path="/var/lib/kubelet/pods/313d3857-140a-4a66-8329-12453fc8dd4c/volumes" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.329736 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.329969 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.330146 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.330205 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.333754 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.333916 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.334112 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.355320 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.431511 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.434742 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b94af3f-603c-4a3e-966e-7a4bfbc78178" (UID: "3b94af3f-603c-4a3e-966e-7a4bfbc78178"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.458417 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.535104 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:10 crc kubenswrapper[4985]: I0128 18:44:10.045314 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"] Jan 28 18:44:10 crc kubenswrapper[4985]: W0128 18:44:10.045774 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3865f1db_f707_4b28_bbf2_8ce1975baa1f.slice/crio-1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b WatchSource:0}: Error finding container 1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b: Status 404 returned error can't find the container with id 1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b Jan 28 18:44:10 crc kubenswrapper[4985]: I0128 18:44:10.073503 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ae555e00-c2df-4fce-af07-a91133f8767d","Type":"ContainerStarted","Data":"cc0f2c6847c1a9b5425f85e49cf7204693ce4a7d7259a408948f5275caec3ac2"} Jan 28 18:44:10 crc kubenswrapper[4985]: I0128 18:44:10.075391 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" event={"ID":"3865f1db-f707-4b28-bbf2-8ce1975baa1f","Type":"ContainerStarted","Data":"1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b"} Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.099677 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerID="3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731" exitCode=0 Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.100254 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerID="0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65" exitCode=0 Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.100135 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731"} Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.100395 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65"} Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.104109 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ae555e00-c2df-4fce-af07-a91133f8767d","Type":"ContainerStarted","Data":"3f596ee94730f42a50d8192fb4c5ca1568a36162c5e3f9d2fddd534fad4f30ed"} Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.106000 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" event={"ID":"3865f1db-f707-4b28-bbf2-8ce1975baa1f","Type":"ContainerStarted","Data":"bc9afc05871aa23d4c3db1d4e88d2efe8c3615724cb67da049ef34770cd610ef"} Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.151133 
Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.866530 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011001 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") "
Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011195 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") "
Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011242 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") "
Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011367 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") "
Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011404 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") "
Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011464 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") "
Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.020068 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9" (OuterVolumeSpecName: "kube-api-access-rndb9") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "kube-api-access-rndb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.027401 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts" (OuterVolumeSpecName: "scripts") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.087299 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.109790 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.114728 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.114776 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.114791 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.114805 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.140659 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.140962 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"bc5e5343b1013225c0f09fa05053ffaef8f092c7d05aeab8940382306b98a83a"} Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.141136 4985 scope.go:117] "RemoveContainer" containerID="3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.182482 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data" (OuterVolumeSpecName: "config-data") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.213579 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.217103 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.217135 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.266338 4985 scope.go:117] "RemoveContainer" containerID="0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.298720 4985 scope.go:117] "RemoveContainer" containerID="a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.326039 4985 scope.go:117] "RemoveContainer" containerID="352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.505313 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.525319 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.536724 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.537397 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-evaluator" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537422 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-evaluator" Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.537446 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-notifier" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537454 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-notifier" Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.537476 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-api" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537483 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-api" Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.537510 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-listener" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537518 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" 
containerName="aodh-listener" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537787 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-listener" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537814 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-api" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537829 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-evaluator" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537840 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-notifier" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.540097 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.544937 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.545134 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.545265 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bbsjj" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.546569 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.546894 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.547816 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.579641 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dfcde6a_1a5e_454b_8fdb_29b33c0bb80e.slice\": RecentStats: unable to find data in memory cache]" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-config-data\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744377 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744423 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-scripts\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744489 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ht5s6\" (UniqueName: \"kubernetes.io/projected/9f75cd8d-6a02-43e4-8e58-92f8d024311b-kube-api-access-ht5s6\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744521 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-internal-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744597 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-public-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847672 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-config-data\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-scripts\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847805 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht5s6\" (UniqueName: \"kubernetes.io/projected/9f75cd8d-6a02-43e4-8e58-92f8d024311b-kube-api-access-ht5s6\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847828 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-internal-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847874 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-public-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.853188 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-scripts\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.855470 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-config-data\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.856187 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.857614 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-public-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.862440 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-internal-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.870328 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht5s6\" (UniqueName: \"kubernetes.io/projected/9f75cd8d-6a02-43e4-8e58-92f8d024311b-kube-api-access-ht5s6\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:13 crc kubenswrapper[4985]: I0128 18:44:13.165235 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:44:13 crc kubenswrapper[4985]: I0128 18:44:13.278983 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" path="/var/lib/kubelet/pods/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e/volumes" Jan 28 18:44:13 crc kubenswrapper[4985]: I0128 18:44:13.669218 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:14 crc kubenswrapper[4985]: I0128 18:44:14.163874 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"599f433e6e07f7f29b55761c870470f88d9785648c856771468211fdd5b0b9d5"} Jan 28 18:44:15 crc kubenswrapper[4985]: I0128 18:44:15.179030 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"6a16e29998d0204774709ad186ac56ea5ecfa8ddcb3a94af744722bfa2f69164"} Jan 28 18:44:16 crc kubenswrapper[4985]: I0128 18:44:16.198751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"2ee75215963d47e4abc8bbc03a7bc027dbf8f4a5eb9d5f4a75453b2088dea6b2"} Jan 28 18:44:16 crc kubenswrapper[4985]: I0128 18:44:16.268228 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:44:16 crc kubenswrapper[4985]: E0128 18:44:16.269142 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:44:17 crc kubenswrapper[4985]: I0128 18:44:17.215988 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"f77be6508118811e0e0c175857c64ed4c215da705cbccb44e6e372e011e9bb6e"} Jan 28 18:44:19 crc kubenswrapper[4985]: I0128 18:44:19.239697 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"a8c39c795be1a0f809d3e3083127dedc1663461a6d6f386ad6a1df590232c344"} Jan 28 18:44:19 crc kubenswrapper[4985]: I0128 18:44:19.280994 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.056400966 podStartE2EDuration="7.280975959s" podCreationTimestamp="2026-01-28 18:44:12 +0000 UTC" firstStartedPulling="2026-01-28 18:44:13.696025962 +0000 UTC m=+1864.522588783" lastFinishedPulling="2026-01-28 18:44:17.920600955 +0000 UTC m=+1868.747163776" observedRunningTime="2026-01-28 18:44:19.272098399 +0000 UTC m=+1870.098661280" watchObservedRunningTime="2026-01-28 18:44:19.280975959 +0000 UTC m=+1870.107538770" Jan 28 18:44:28 crc kubenswrapper[4985]: I0128 18:44:28.265433 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:44:28 crc kubenswrapper[4985]: E0128 18:44:28.266298 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:44:37 crc kubenswrapper[4985]: I0128 18:44:37.131582 4985 scope.go:117] "RemoveContainer" containerID="d27c06d418e20207c2740cbbbe652b37993ed962b6ece756db68f47e6fdcdfce" Jan 28 18:44:37 crc kubenswrapper[4985]: I0128 18:44:37.168230 4985 scope.go:117] "RemoveContainer" containerID="1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71" Jan 28 18:44:37 crc kubenswrapper[4985]: I0128 18:44:37.244143 4985 scope.go:117] "RemoveContainer" containerID="c2123433fc9db86b4e9f9ac84736c01949000210bd3cce880a9a4ecb7af8212e" Jan 28 18:44:43 crc kubenswrapper[4985]: I0128 18:44:43.268510 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:44:43 crc kubenswrapper[4985]: I0128 18:44:43.802652 4985 generic.go:334] "Generic (PLEG): container finished" podID="ae555e00-c2df-4fce-af07-a91133f8767d" containerID="3f596ee94730f42a50d8192fb4c5ca1568a36162c5e3f9d2fddd534fad4f30ed" exitCode=0 Jan 28 18:44:43 crc kubenswrapper[4985]: I0128 18:44:43.802748 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ae555e00-c2df-4fce-af07-a91133f8767d","Type":"ContainerDied","Data":"3f596ee94730f42a50d8192fb4c5ca1568a36162c5e3f9d2fddd534fad4f30ed"} Jan 28 18:44:43 crc kubenswrapper[4985]: I0128 18:44:43.806009 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" 
event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87"} Jan 28 18:44:44 crc kubenswrapper[4985]: I0128 18:44:44.823849 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ae555e00-c2df-4fce-af07-a91133f8767d","Type":"ContainerStarted","Data":"85ace350c9eb3209c1e405e7336cf4947ba7e03f10c6bdca9e56f9a095a2540e"} Jan 28 18:44:44 crc kubenswrapper[4985]: I0128 18:44:44.824643 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 28 18:44:44 crc kubenswrapper[4985]: I0128 18:44:44.868375 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=36.868354673 podStartE2EDuration="36.868354673s" podCreationTimestamp="2026-01-28 18:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:44:44.854600744 +0000 UTC m=+1895.681163565" watchObservedRunningTime="2026-01-28 18:44:44.868354673 +0000 UTC m=+1895.694917494" Jan 28 18:44:58 crc kubenswrapper[4985]: I0128 18:44:58.549547 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 28 18:44:58 crc kubenswrapper[4985]: I0128 18:44:58.680391 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.161106 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"] Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.163330 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.177405 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"] Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.180505 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.180795 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.235109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.235398 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.235684 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.338227 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.338501 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.338526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.339920 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") pod 
\"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.352549 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.364243 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.553838 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:01 crc kubenswrapper[4985]: I0128 18:45:01.061804 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"] Jan 28 18:45:02 crc kubenswrapper[4985]: I0128 18:45:02.013147 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" event={"ID":"62198283-1005-48a7-91a7-44d4240224ef","Type":"ContainerStarted","Data":"e7f4c4199443b277fce34519a5f0cc3daf60a217d86701b9fd4cb717d8480164"} Jan 28 18:45:02 crc kubenswrapper[4985]: I0128 18:45:02.013780 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" event={"ID":"62198283-1005-48a7-91a7-44d4240224ef","Type":"ContainerStarted","Data":"1ce94eac799321de69e9c9fc5fc48746bb0c136d311f15aa248ff7840a09e662"} Jan 28 18:45:02 crc kubenswrapper[4985]: I0128 18:45:02.028862 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" podStartSLOduration=2.028845228 podStartE2EDuration="2.028845228s" podCreationTimestamp="2026-01-28 18:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:45:02.028008134 +0000 UTC m=+1912.854570955" watchObservedRunningTime="2026-01-28 18:45:02.028845228 +0000 UTC m=+1912.855408049" Jan 28 18:45:03 crc kubenswrapper[4985]: I0128 18:45:03.025994 4985 generic.go:334] "Generic (PLEG): container finished" podID="62198283-1005-48a7-91a7-44d4240224ef" containerID="e7f4c4199443b277fce34519a5f0cc3daf60a217d86701b9fd4cb717d8480164" exitCode=0 Jan 28 18:45:03 crc kubenswrapper[4985]: I0128 18:45:03.026042 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" event={"ID":"62198283-1005-48a7-91a7-44d4240224ef","Type":"ContainerDied","Data":"e7f4c4199443b277fce34519a5f0cc3daf60a217d86701b9fd4cb717d8480164"} Jan 28 18:45:03 crc kubenswrapper[4985]: I0128 18:45:03.371674 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" 
containerID="cri-o://ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" gracePeriod=604796 Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.582826 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.666182 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") pod \"62198283-1005-48a7-91a7-44d4240224ef\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.666377 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") pod \"62198283-1005-48a7-91a7-44d4240224ef\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.666760 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") pod \"62198283-1005-48a7-91a7-44d4240224ef\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.666931 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "62198283-1005-48a7-91a7-44d4240224ef" (UID: "62198283-1005-48a7-91a7-44d4240224ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.668041 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.674231 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm" (OuterVolumeSpecName: "kube-api-access-j5fcm") pod "62198283-1005-48a7-91a7-44d4240224ef" (UID: "62198283-1005-48a7-91a7-44d4240224ef"). InnerVolumeSpecName "kube-api-access-j5fcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.678508 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "62198283-1005-48a7-91a7-44d4240224ef" (UID: "62198283-1005-48a7-91a7-44d4240224ef"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.770372 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.770411 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:05 crc kubenswrapper[4985]: I0128 18:45:05.053057 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" event={"ID":"62198283-1005-48a7-91a7-44d4240224ef","Type":"ContainerDied","Data":"1ce94eac799321de69e9c9fc5fc48746bb0c136d311f15aa248ff7840a09e662"} Jan 28 18:45:05 crc kubenswrapper[4985]: I0128 18:45:05.053619 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ce94eac799321de69e9c9fc5fc48746bb0c136d311f15aa248ff7840a09e662" Jan 28 18:45:05 crc kubenswrapper[4985]: I0128 18:45:05.053345 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.030516 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123636 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123735 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123785 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123953 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123987 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.124058 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.124089 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.124128 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.124161 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.125497 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129594 4985 generic.go:334] "Generic (PLEG): container finished" podID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" exitCode=0 Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129653 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerDied","Data":"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d"} Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129687 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerDied","Data":"210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23"} Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129707 4985 scope.go:117] "RemoveContainer" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129898 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129991 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.130738 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.132080 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.136828 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.144542 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw" (OuterVolumeSpecName: "kube-api-access-r4mrw") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "kube-api-access-r4mrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.180738 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.191998 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info" (OuterVolumeSpecName: "pod-info") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.197640 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03" (OuterVolumeSpecName: "persistence") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.208328 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data" (OuterVolumeSpecName: "config-data") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237539 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237586 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237597 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237606 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237619 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237632 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237676 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") on node \"crc\" " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237695 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237709 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.283305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf" (OuterVolumeSpecName: "server-conf") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.292992 4985 scope.go:117] "RemoveContainer" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.294974 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.295216 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03") on node "crc" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.339728 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.339773 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.345935 4985 scope.go:117] "RemoveContainer" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.347050 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d\": container with ID starting with ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d not found: ID does not exist" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.347097 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d"} err="failed to get container status \"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d\": rpc error: code = NotFound desc = could not find container \"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d\": container with ID starting with ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d not found: ID does not exist" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.347123 4985 scope.go:117] "RemoveContainer" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517" Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.347414 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517\": container with ID starting with 51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517 not found: ID does not exist" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.347443 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517"} err="failed to get container status \"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517\": rpc error: code = NotFound desc = 
could not find container \"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517\": container with ID starting with 51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517 not found: ID does not exist" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.352094 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.441599 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.482993 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.504271 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.517693 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.518344 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="setup-container" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518367 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="setup-container" Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.518396 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62198283-1005-48a7-91a7-44d4240224ef" containerName="collect-profiles" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518405 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="62198283-1005-48a7-91a7-44d4240224ef" containerName="collect-profiles" Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.518422 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518430 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518724 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="62198283-1005-48a7-91a7-44d4240224ef" containerName="collect-profiles" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518757 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.520291 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.534623 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656066 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656229 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-config-data\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656327 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656353 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656438 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656473 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656493 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656547 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656565 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656579 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wkn\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-kube-api-access-w4wkn\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758323 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758387 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-config-data\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758471 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758500 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758561 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758589 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " 
pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758609 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758649 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758666 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wkn\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-kube-api-access-w4wkn\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758681 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.759352 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.759574 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.760054 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.760558 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.761045 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-config-data\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.761242 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.761303 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c775c7dad0eb68939020e6ac39de7a8b8505e50517c4739aca512474a1c5503/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.764729 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.764915 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.770592 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.772986 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.777026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wkn\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-kube-api-access-w4wkn\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.835609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.856508 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:45:11 crc kubenswrapper[4985]: I0128 18:45:11.279806 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" path="/var/lib/kubelet/pods/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541/volumes" Jan 28 18:45:11 crc kubenswrapper[4985]: I0128 18:45:11.374037 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:45:12 crc kubenswrapper[4985]: I0128 18:45:12.164162 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe","Type":"ContainerStarted","Data":"0b8fe0b05d817e6602bab1697f2117e1cc7cb2712aee0c798c6e6d8d4c1ecee2"} Jan 28 18:45:13 crc kubenswrapper[4985]: I0128 18:45:13.187158 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe","Type":"ContainerStarted","Data":"16f535ef854b9c0ece73b0832601c36f1589afcd2ce2c474cd161032d681a6ab"} Jan 28 18:45:14 crc kubenswrapper[4985]: I0128 18:45:14.835623 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: i/o timeout" Jan 28 18:45:37 crc kubenswrapper[4985]: I0128 18:45:37.386589 4985 scope.go:117] "RemoveContainer" containerID="a38360ca0387e0827a57f03126984e0a20e5b118f82925b6ad3b02f72f4d6f3b" Jan 28 18:45:37 crc kubenswrapper[4985]: I0128 18:45:37.415606 4985 scope.go:117] "RemoveContainer" containerID="ebfc9ea99db013235f5adee2c18ba99af05a9f8dc3abaf0616d7d804e0cb54cc" Jan 28 18:45:37 crc kubenswrapper[4985]: I0128 18:45:37.445387 4985 scope.go:117] "RemoveContainer" containerID="6264c75e309967c9f20db46eab077cb1a5ee5f417ccd8f79e08cda266fd4cda5" Jan 28 18:45:37 crc kubenswrapper[4985]: I0128 18:45:37.535994 4985 scope.go:117] "RemoveContainer" containerID="2588192f60378ca1092182e85a2d142272639f43f1993cca86706ccb45ce9080" Jan 28 18:45:45 crc kubenswrapper[4985]: I0128 18:45:45.942520 4985 generic.go:334] "Generic (PLEG): container finished" podID="dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe" containerID="16f535ef854b9c0ece73b0832601c36f1589afcd2ce2c474cd161032d681a6ab" exitCode=0 Jan 28 18:45:45 crc kubenswrapper[4985]: I0128 18:45:45.942605 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe","Type":"ContainerDied","Data":"16f535ef854b9c0ece73b0832601c36f1589afcd2ce2c474cd161032d681a6ab"} Jan 28 18:45:46 crc kubenswrapper[4985]: I0128 18:45:46.955355 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe","Type":"ContainerStarted","Data":"c8fc90583f6fc69d68acdaee4058c687323d207d0e51813f6f54b16440681da2"} Jan 28 18:45:46 crc kubenswrapper[4985]: I0128 18:45:46.956380 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 18:45:46 crc kubenswrapper[4985]: I0128 18:45:46.991009 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.990991419 podStartE2EDuration="36.990991419s" podCreationTimestamp="2026-01-28 18:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
Jan 28 18:45:46 crc kubenswrapper[4985]: I0128 18:45:46.991009 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.990991419 podStartE2EDuration="36.990991419s" podCreationTimestamp="2026-01-28 18:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:45:46.975576143 +0000 UTC m=+1957.802138964" watchObservedRunningTime="2026-01-28 18:45:46.990991419 +0000 UTC m=+1957.817554240"
Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.047576 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ksczb"]
Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.060331 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ksczb"]
Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.074763 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"]
Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.085765 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"]
Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.285352 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9900c5fe-8fec-452e-86cc-98d901c94329" path="/var/lib/kubelet/pods/9900c5fe-8fec-452e-86cc-98d901c94329/volumes"
Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.288242 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6004532-b8ab-4b69-9907-e7bd26c6735a" path="/var/lib/kubelet/pods/e6004532-b8ab-4b69-9907-e7bd26c6735a/volumes"
Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.037736 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9qd5p"]
Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.052453 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-z2jgs"]
Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.068794 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"]
Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.082518 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"]
Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.095928 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9qd5p"]
Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.107996 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"]
Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.119491 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"]
Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.130766 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-z2jgs"]
Jan 28 18:45:59 crc kubenswrapper[4985]: I0128 18:45:59.305534 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a24a5c2-4c45-43dd-a957-253323fed4d5" path="/var/lib/kubelet/pods/1a24a5c2-4c45-43dd-a957-253323fed4d5/volumes"
Jan 28 18:45:59 crc kubenswrapper[4985]: I0128 18:45:59.306955 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346cb311-0387-4c85-9827-e0091b1e6bcd" path="/var/lib/kubelet/pods/346cb311-0387-4c85-9827-e0091b1e6bcd/volumes"
Jan 28 18:45:59 crc kubenswrapper[4985]: I0128 18:45:59.308855 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4adf60c6-4008-4f41-a60b-cf10db1657cf" path="/var/lib/kubelet/pods/4adf60c6-4008-4f41-a60b-cf10db1657cf/volumes"
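In the tracker entry above, podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp; the image-pull window is excluded, and both firstStartedPulling and lastFinishedPulling are the zero time here because the images were already present. A sketch reproducing the arithmetic: the timestamps are printed in Go's time.Time String() format, so the monotonic-clock suffix (" m=+...") has to be stripped before parsing.

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // parseKubeletTime handles the time.Time.String() format the tracker logs,
    // dropping the trailing monotonic-clock reading when present.
    func parseKubeletTime(s string) (time.Time, error) {
        if i := strings.Index(s, " m=+"); i >= 0 {
            s = s[:i]
        }
        return time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    }

    func main() {
        created, _ := parseKubeletTime("2026-01-28 18:45:10 +0000 UTC")
        observed, _ := parseKubeletTime("2026-01-28 18:45:46.990991419 +0000 UTC m=+1957.817554240")
        fmt.Println(observed.Sub(created)) // 36.990991419s, matching podStartSLOduration above
    }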
podUID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" path="/var/lib/kubelet/pods/8c2755f3-fac4-4f0b-9afb-a449f1587d11/volumes" Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.034014 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"] Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.050458 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"] Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.061415 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"] Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.075160 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"] Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.859644 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 18:46:01 crc kubenswrapper[4985]: I0128 18:46:01.287918 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9193a306-03fe-41ae-8b93-2851b08c73fb" path="/var/lib/kubelet/pods/9193a306-03fe-41ae-8b93-2851b08c73fb/volumes" Jan 28 18:46:01 crc kubenswrapper[4985]: I0128 18:46:01.288771 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" path="/var/lib/kubelet/pods/dbefdfab-0ef2-4f71-9e9c-412c4dd87886/volumes" Jan 28 18:46:04 crc kubenswrapper[4985]: I0128 18:46:04.038981 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9sg6w"] Jan 28 18:46:04 crc kubenswrapper[4985]: I0128 18:46:04.059553 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9sg6w"] Jan 28 18:46:05 crc kubenswrapper[4985]: I0128 18:46:05.278793 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" path="/var/lib/kubelet/pods/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212/volumes" Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.038921 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"] Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.051818 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"] Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.064082 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"] Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.074721 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"] Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.277870 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f6fb79-54ff-4a24-ad53-5812b6faa504" path="/var/lib/kubelet/pods/53f6fb79-54ff-4a24-ad53-5812b6faa504/volumes" Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.278594 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" path="/var/lib/kubelet/pods/8c57cd6d-54d8-4d7c-863c-cfd30fab0768/volumes" Jan 28 18:46:37 crc kubenswrapper[4985]: I0128 18:46:37.747064 4985 scope.go:117] "RemoveContainer" containerID="521672f13c59cc25ffac94ddae42298d333bbe43930229a9ebba2d7ae20a8b6d" Jan 28 18:46:37 crc kubenswrapper[4985]: 
I0128 18:46:37.781519 4985 scope.go:117] "RemoveContainer" containerID="3060e8923564aa30fd03bf66b3d5bcff3578ea99d0b7eb76a560b9022326b58d" Jan 28 18:46:37 crc kubenswrapper[4985]: I0128 18:46:37.857667 4985 scope.go:117] "RemoveContainer" containerID="448c9182ae2c3757a2a9e99f29042394c97a623fe1975f8bf4c1b669c7542ca8" Jan 28 18:46:37 crc kubenswrapper[4985]: I0128 18:46:37.907824 4985 scope.go:117] "RemoveContainer" containerID="a5fdb593967057491cb666085c46aac8c70a1408fffafe7d2ec91a2157ba041a" Jan 28 18:46:37 crc kubenswrapper[4985]: I0128 18:46:37.971597 4985 scope.go:117] "RemoveContainer" containerID="cecab7e544d7d4e5d190c44116d919bb9260ba70670cc5c4245efeb8c2adb050" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.049984 4985 scope.go:117] "RemoveContainer" containerID="609eafe7485b15327ad2db6af8fea1da5eeeb224da5b54e1005034d41800fc19" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.129624 4985 scope.go:117] "RemoveContainer" containerID="7b723368d435c52066b70f7b63bb7ce17848129ed979021f777f40ce02cde0ea" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.153758 4985 scope.go:117] "RemoveContainer" containerID="b2b6ff931f4d8121ddd40be80d57520170cc490944b52533c2717e3ed1e070dd" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.186353 4985 scope.go:117] "RemoveContainer" containerID="dac80678a434994386297bfe622d70833a87d9d21510a5da7f0de00c71f32e28" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.243521 4985 scope.go:117] "RemoveContainer" containerID="b5b1a4710b8858945982e3f5911ca4fd86e8a7dae739eb3659e4c396927b6955" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.313977 4985 scope.go:117] "RemoveContainer" containerID="6c205ff1c9724512d656b6452f88a456eabb29c117c2d744ca2a5dce502105d6" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.367242 4985 scope.go:117] "RemoveContainer" containerID="4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.448664 4985 scope.go:117] "RemoveContainer" containerID="1f111c090d549d68eb9c893a3868b82edfed972f352a2924277825559a933734" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.480029 4985 scope.go:117] "RemoveContainer" containerID="156d97e63d4214e7b4ebce332bf5ca2efd74529bc9a0eb50a6b04fcfb1f0fcab" Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.059630 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.075987 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4fswm"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.088124 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-5stnz"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.103660 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.115228 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-888tv"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.128734 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-br7rn"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.144645 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.157346 4985 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-db-create-4fswm"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.168579 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.178790 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.189932 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-888tv"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.200785 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.213286 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.225987 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-br7rn"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.237436 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.251376 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-5stnz"] Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.282135 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7822ab-0225-4deb-a283-374e32bc995f" path="/var/lib/kubelet/pods/0a7822ab-0225-4deb-a283-374e32bc995f/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.287733 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc487cd-a539-4daa-8c13-40d0cea82770" path="/var/lib/kubelet/pods/0fc487cd-a539-4daa-8c13-40d0cea82770/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.291838 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" path="/var/lib/kubelet/pods/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.295137 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" path="/var/lib/kubelet/pods/6d078ca4-34dd-4a65-a2e4-ffc23f098285/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.311177 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768c2a33-259c-4194-ad30-8edffff92f18" path="/var/lib/kubelet/pods/768c2a33-259c-4194-ad30-8edffff92f18/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.316763 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887f886a-9541-4075-9d32-0d8feaf32722" path="/var/lib/kubelet/pods/887f886a-9541-4075-9d32-0d8feaf32722/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.319124 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c052fbc1-a102-456b-8658-c954fe91534b" path="/var/lib/kubelet/pods/c052fbc1-a102-456b-8658-c954fe91534b/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.320596 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7074267-6514-4b90-9aef-a4df05b52054" path="/var/lib/kubelet/pods/d7074267-6514-4b90-9aef-a4df05b52054/volumes" Jan 28 18:46:42 crc kubenswrapper[4985]: I0128 18:46:42.038138 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-sync-5q5qm"] Jan 28 18:46:42 crc kubenswrapper[4985]: I0128 18:46:42.051904 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5q5qm"] Jan 28 18:46:43 crc kubenswrapper[4985]: I0128 18:46:43.286214 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="229b9159-df89-4859-b5f3-d34b2759d0fd" path="/var/lib/kubelet/pods/229b9159-df89-4859-b5f3-d34b2759d0fd/volumes" Jan 28 18:46:46 crc kubenswrapper[4985]: I0128 18:46:46.028969 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-49fs2"] Jan 28 18:46:46 crc kubenswrapper[4985]: I0128 18:46:46.042405 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-49fs2"] Jan 28 18:46:47 crc kubenswrapper[4985]: I0128 18:46:47.284571 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" path="/var/lib/kubelet/pods/6c3b6ba3-2c25-4da1-b02f-de0e776383c1/volumes" Jan 28 18:47:11 crc kubenswrapper[4985]: I0128 18:47:11.185999 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:47:11 crc kubenswrapper[4985]: I0128 18:47:11.186653 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.134413 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zjwln"] Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.138021 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.145376 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"] Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.309778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.310125 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.310222 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.412982 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.413047 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.413235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.413603 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.413640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.456295 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.463196 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:17 crc kubenswrapper[4985]: I0128 18:47:17.130474 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"] Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.082987 4985 generic.go:334] "Generic (PLEG): container finished" podID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerID="8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748" exitCode=0 Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.083372 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerDied","Data":"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748"} Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.083407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerStarted","Data":"b5edb2b86f696acde21c697dd591a86e6bb2afd0a8cb27222ce7b1cd843ebb0e"} Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.087724 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.544450 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"] Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.548502 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.564899 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"] Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.644569 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.644638 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.644671 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747385 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747420 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747932 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747980 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.769219 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.882878 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.130650 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.133443 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.142827 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.156820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.156872 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.156930 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259075 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259136 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259629 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.294472 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.456944 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.825223 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"] Jan 28 18:47:20 crc kubenswrapper[4985]: W0128 18:47:20.011700 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13b350b8_ace5_45c9_9de3_0b4887795c48.slice/crio-04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962 WatchSource:0}: Error finding container 04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962: Status 404 returned error can't find the container with id 04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962 Jan 28 18:47:20 crc kubenswrapper[4985]: I0128 18:47:20.012902 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:20 crc kubenswrapper[4985]: I0128 18:47:20.114311 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerStarted","Data":"04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962"} Jan 28 18:47:20 crc kubenswrapper[4985]: I0128 18:47:20.116326 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerStarted","Data":"fbb3b7576bc49a07a7ed4e1638eb87bdd32c1fd17054a063d0d281a60776ca08"} Jan 28 18:47:20 crc kubenswrapper[4985]: I0128 18:47:20.119659 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerStarted","Data":"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b"} Jan 28 18:47:21 crc kubenswrapper[4985]: I0128 18:47:21.131491 4985 generic.go:334] "Generic (PLEG): container finished" podID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerID="8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51" exitCode=0 Jan 28 18:47:21 crc kubenswrapper[4985]: I0128 18:47:21.131558 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" 
event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerDied","Data":"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51"} Jan 28 18:47:21 crc kubenswrapper[4985]: I0128 18:47:21.135689 4985 generic.go:334] "Generic (PLEG): container finished" podID="a647567b-b5d7-4001-aeb7-085793d361ae" containerID="9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2" exitCode=0 Jan 28 18:47:21 crc kubenswrapper[4985]: I0128 18:47:21.135908 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerDied","Data":"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2"} Jan 28 18:47:23 crc kubenswrapper[4985]: I0128 18:47:23.182629 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerStarted","Data":"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656"} Jan 28 18:47:23 crc kubenswrapper[4985]: I0128 18:47:23.185755 4985 generic.go:334] "Generic (PLEG): container finished" podID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerID="1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b" exitCode=0 Jan 28 18:47:23 crc kubenswrapper[4985]: I0128 18:47:23.185831 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerDied","Data":"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b"} Jan 28 18:47:23 crc kubenswrapper[4985]: I0128 18:47:23.193910 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerStarted","Data":"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5"} Jan 28 18:47:26 crc kubenswrapper[4985]: I0128 18:47:26.237225 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerStarted","Data":"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18"} Jan 28 18:47:26 crc kubenswrapper[4985]: I0128 18:47:26.273040 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zjwln" podStartSLOduration=3.470990271 podStartE2EDuration="10.273016429s" podCreationTimestamp="2026-01-28 18:47:16 +0000 UTC" firstStartedPulling="2026-01-28 18:47:18.085375754 +0000 UTC m=+2048.911938595" lastFinishedPulling="2026-01-28 18:47:24.887401932 +0000 UTC m=+2055.713964753" observedRunningTime="2026-01-28 18:47:26.257559701 +0000 UTC m=+2057.084122512" watchObservedRunningTime="2026-01-28 18:47:26.273016429 +0000 UTC m=+2057.099579260" Jan 28 18:47:26 crc kubenswrapper[4985]: I0128 18:47:26.463452 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:26 crc kubenswrapper[4985]: I0128 18:47:26.463590 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:27 crc kubenswrapper[4985]: I0128 18:47:27.518320 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zjwln" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server" 
probeResult="failure" output=< Jan 28 18:47:27 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:47:27 crc kubenswrapper[4985]: > Jan 28 18:47:28 crc kubenswrapper[4985]: I0128 18:47:28.260959 4985 generic.go:334] "Generic (PLEG): container finished" podID="a647567b-b5d7-4001-aeb7-085793d361ae" containerID="77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656" exitCode=0 Jan 28 18:47:28 crc kubenswrapper[4985]: I0128 18:47:28.261003 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerDied","Data":"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656"} Jan 28 18:47:30 crc kubenswrapper[4985]: I0128 18:47:30.288707 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerStarted","Data":"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06"} Jan 28 18:47:30 crc kubenswrapper[4985]: I0128 18:47:30.315149 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qvjh4" podStartSLOduration=4.000411963 podStartE2EDuration="12.315127022s" podCreationTimestamp="2026-01-28 18:47:18 +0000 UTC" firstStartedPulling="2026-01-28 18:47:21.140882459 +0000 UTC m=+2051.967445300" lastFinishedPulling="2026-01-28 18:47:29.455597538 +0000 UTC m=+2060.282160359" observedRunningTime="2026-01-28 18:47:30.306099436 +0000 UTC m=+2061.132662257" watchObservedRunningTime="2026-01-28 18:47:30.315127022 +0000 UTC m=+2061.141689853" Jan 28 18:47:34 crc kubenswrapper[4985]: I0128 18:47:34.332146 4985 generic.go:334] "Generic (PLEG): container finished" podID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerID="91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5" exitCode=0 Jan 28 18:47:34 crc kubenswrapper[4985]: I0128 18:47:34.332226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerDied","Data":"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5"} Jan 28 18:47:35 crc kubenswrapper[4985]: I0128 18:47:35.354273 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerStarted","Data":"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae"} Jan 28 18:47:35 crc kubenswrapper[4985]: I0128 18:47:35.386796 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6l7vb" podStartSLOduration=2.669651987 podStartE2EDuration="16.38677753s" podCreationTimestamp="2026-01-28 18:47:19 +0000 UTC" firstStartedPulling="2026-01-28 18:47:21.133824329 +0000 UTC m=+2051.960387150" lastFinishedPulling="2026-01-28 18:47:34.850949872 +0000 UTC m=+2065.677512693" observedRunningTime="2026-01-28 18:47:35.373990938 +0000 UTC m=+2066.200553779" watchObservedRunningTime="2026-01-28 18:47:35.38677753 +0000 UTC m=+2066.213340351" Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.367430 4985 generic.go:334] "Generic (PLEG): container finished" podID="3865f1db-f707-4b28-bbf2-8ce1975baa1f" containerID="bc9afc05871aa23d4c3db1d4e88d2efe8c3615724cb67da049ef34770cd610ef" exitCode=0 Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.367498 4985 
Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.367430 4985 generic.go:334] "Generic (PLEG): container finished" podID="3865f1db-f707-4b28-bbf2-8ce1975baa1f" containerID="bc9afc05871aa23d4c3db1d4e88d2efe8c3615724cb67da049ef34770cd610ef" exitCode=0
Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.367498 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" event={"ID":"3865f1db-f707-4b28-bbf2-8ce1975baa1f","Type":"ContainerDied","Data":"bc9afc05871aa23d4c3db1d4e88d2efe8c3615724cb67da049ef34770cd610ef"}
Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.520796 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zjwln"
Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.590371 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zjwln"
Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.761155 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"]
Jan 28 18:47:37 crc kubenswrapper[4985]: I0128 18:47:37.942572 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.050597 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") pod \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") "
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.050657 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") pod \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") "
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.050712 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") pod \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") "
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.051644 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") pod \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") "
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.065342 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3865f1db-f707-4b28-bbf2-8ce1975baa1f" (UID: "3865f1db-f707-4b28-bbf2-8ce1975baa1f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.069149 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll" (OuterVolumeSpecName: "kube-api-access-4r6ll") pod "3865f1db-f707-4b28-bbf2-8ce1975baa1f" (UID: "3865f1db-f707-4b28-bbf2-8ce1975baa1f"). InnerVolumeSpecName "kube-api-access-4r6ll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.089025 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3865f1db-f707-4b28-bbf2-8ce1975baa1f" (UID: "3865f1db-f707-4b28-bbf2-8ce1975baa1f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.089967 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory" (OuterVolumeSpecName: "inventory") pod "3865f1db-f707-4b28-bbf2-8ce1975baa1f" (UID: "3865f1db-f707-4b28-bbf2-8ce1975baa1f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.155035 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.155075 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.155088 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.155107 4985 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.390041 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" event={"ID":"3865f1db-f707-4b28-bbf2-8ce1975baa1f","Type":"ContainerDied","Data":"1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b"}
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.390978 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.390299 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zjwln" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server" containerID="cri-o://01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18" gracePeriod=2
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.390061 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.490239 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"]
Jan 28 18:47:38 crc kubenswrapper[4985]: E0128 18:47:38.490777 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3865f1db-f707-4b28-bbf2-8ce1975baa1f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.490800 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3865f1db-f707-4b28-bbf2-8ce1975baa1f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.491111 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3865f1db-f707-4b28-bbf2-8ce1975baa1f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.492189 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.495365 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.495548 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.495463 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.496108 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.511507 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"]
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.666550 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.666744 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.666800 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.769636 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.770156 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.770226 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.775793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.775821 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.788609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.826133 4985 scope.go:117] "RemoveContainer" containerID="8d83ae610aea076db41903e479372673c489635bc359f8ba503ad92865568b4d"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.881702 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.883090 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qvjh4"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.883194 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qvjh4"
Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.953893 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qvjh4"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.050559 4985 scope.go:117] "RemoveContainer" containerID="6f81b27fc2e7a5ce52780bd694a1d7b0af6de17e38f2a816f35448cc2f8e93b0"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.078891 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjwln"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.105433 4985 scope.go:117] "RemoveContainer" containerID="82ff15708c7feba4b50bfae36f824c144bddeb2ec8ddc05a588aede4034d1eb1"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.163002 4985 scope.go:117] "RemoveContainer" containerID="92ba33b439db2a5df5ff34914eff515d7a059caada35a79afe448a92f1201c1e"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.180069 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") pod \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") "
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.180236 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") pod \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") "
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.180298 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") pod \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") "
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.181647 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities" (OuterVolumeSpecName: "utilities") pod "4ccb0c01-9886-4215-b63d-a0fdcc81a25c" (UID: "4ccb0c01-9886-4215-b63d-a0fdcc81a25c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.182491 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.186660 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq" (OuterVolumeSpecName: "kube-api-access-cthvq") pod "4ccb0c01-9886-4215-b63d-a0fdcc81a25c" (UID: "4ccb0c01-9886-4215-b63d-a0fdcc81a25c"). InnerVolumeSpecName "kube-api-access-cthvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.207820 4985 scope.go:117] "RemoveContainer" containerID="ef6310844d9eb58852520a7287dfca2d3780f36ea565d58fea9a7e00a7b9506b"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.238537 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ccb0c01-9886-4215-b63d-a0fdcc81a25c" (UID: "4ccb0c01-9886-4215-b63d-a0fdcc81a25c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.257991 4985 scope.go:117] "RemoveContainer" containerID="f7f9efcfdd23e8d8635c4c036c55b162db6c57b666261780d55e532d672c4438"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.284941 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.284979 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.287645 4985 scope.go:117] "RemoveContainer" containerID="62b40fcabf6fa0fa3594d971ef20837ab76d19a05ef888b27ef59e8e216c6b43"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.315909 4985 scope.go:117] "RemoveContainer" containerID="0ab08bac76909d1e142ea94f2076118980c9731dca96c80e8289000d98f0d6ce"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.344425 4985 scope.go:117] "RemoveContainer" containerID="fc0b5d4f8a27e5da50b50ceabdadd101d74be078c6014be172f85e01027bd9af"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.385827 4985 scope.go:117] "RemoveContainer" containerID="d394f63865046e3bed1c13acb76b2d5b90327e2b0f8a9073a210a53855ab1204"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438793 4985 generic.go:334] "Generic (PLEG): container finished" podID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerID="01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18" exitCode=0
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438875 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerDied","Data":"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18"}
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438921 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjwln"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438936 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerDied","Data":"b5edb2b86f696acde21c697dd591a86e6bb2afd0a8cb27222ce7b1cd843ebb0e"}
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438956 4985 scope.go:117] "RemoveContainer" containerID="01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.459617 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6l7vb"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.459659 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6l7vb"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.481484 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"]
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.493547 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"]
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.517895 4985 scope.go:117] "RemoveContainer" containerID="1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.524959 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qvjh4"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.530148 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"]
Jan 28 18:47:39 crc kubenswrapper[4985]: W0128 18:47:39.538697 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbfc48e7_8a35_4fc6_b9fd_0c1735864116.slice/crio-3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56 WatchSource:0}: Error finding container 3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56: Status 404 returned error can't find the container with id 3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.578346 4985 scope.go:117] "RemoveContainer" containerID="8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.623810 4985 scope.go:117] "RemoveContainer" containerID="01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18"
Jan 28 18:47:39 crc kubenswrapper[4985]: E0128 18:47:39.624310 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18\": container with ID starting with 01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18 not found: ID does not exist" containerID="01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624347 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18"} err="failed to get container status \"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18\": rpc error: code = NotFound desc = could not find container \"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18\": container with ID starting with 01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18 not found: ID does not exist"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624369 4985 scope.go:117] "RemoveContainer" containerID="1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b"
Jan 28 18:47:39 crc kubenswrapper[4985]: E0128 18:47:39.624684 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b\": container with ID starting with 1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b not found: ID does not exist" containerID="1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624720 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b"} err="failed to get container status \"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b\": rpc error: code = NotFound desc = could not find container \"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b\": container with ID starting with 1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b not found: ID does not exist"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624744 4985 scope.go:117] "RemoveContainer" containerID="8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748"
Jan 28 18:47:39 crc kubenswrapper[4985]: E0128 18:47:39.624951 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748\": container with ID starting with 8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748 not found: ID does not exist" containerID="8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748"
Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624980 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748"} err="failed to get container status \"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748\": rpc error: code = NotFound desc = could not find container \"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748\": container with ID starting with 8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748 not found: ID does not exist"
Jan 28 18:47:40 crc kubenswrapper[4985]: I0128 18:47:40.459010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" event={"ID":"fbfc48e7-8a35-4fc6-b9fd-0c1735864116","Type":"ContainerStarted","Data":"24ae801d110a2ccea339ddd0d6272cdb220439bc5457fb577978b735b741f7fc"}
Jan 28 18:47:40 crc kubenswrapper[4985]: I0128 18:47:40.459293 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" event={"ID":"fbfc48e7-8a35-4fc6-b9fd-0c1735864116","Type":"ContainerStarted","Data":"3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56"}
Jan 28 18:47:40 crc kubenswrapper[4985]: I0128 18:47:40.487615 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" podStartSLOduration=2.020700396 podStartE2EDuration="2.487594124s" podCreationTimestamp="2026-01-28 18:47:38 +0000 UTC" firstStartedPulling="2026-01-28 18:47:39.547184521 +0000 UTC m=+2070.373747342" lastFinishedPulling="2026-01-28 18:47:40.014078249 +0000 UTC m=+2070.840641070" observedRunningTime="2026-01-28 18:47:40.477124107 +0000 UTC m=+2071.303686928" watchObservedRunningTime="2026-01-28 18:47:40.487594124 +0000 UTC m=+2071.314156945"
Jan 28 18:47:40 crc kubenswrapper[4985]: I0128 18:47:40.510448 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6l7vb" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server" probeResult="failure" output=<
Jan 28 18:47:40 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 18:47:40 crc kubenswrapper[4985]: >
Jan 28 18:47:41 crc kubenswrapper[4985]: I0128 18:47:41.185730 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:47:41 crc kubenswrapper[4985]: I0128 18:47:41.186140 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:47:41 crc kubenswrapper[4985]: I0128 18:47:41.277439 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" path="/var/lib/kubelet/pods/4ccb0c01-9886-4215-b63d-a0fdcc81a25c/volumes"
Jan 28 18:47:41 crc kubenswrapper[4985]: I0128 18:47:41.763794 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"]
Jan 28 18:47:42 crc kubenswrapper[4985]: I0128 18:47:42.482636 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qvjh4" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="registry-server" containerID="cri-o://7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06" gracePeriod=2
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.066859 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvjh4"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.187151 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") pod \"a647567b-b5d7-4001-aeb7-085793d361ae\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") "
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.187380 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") pod \"a647567b-b5d7-4001-aeb7-085793d361ae\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") "
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.187481 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") pod \"a647567b-b5d7-4001-aeb7-085793d361ae\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") "
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.189019 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities" (OuterVolumeSpecName: "utilities") pod "a647567b-b5d7-4001-aeb7-085793d361ae" (UID: "a647567b-b5d7-4001-aeb7-085793d361ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.193697 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg" (OuterVolumeSpecName: "kube-api-access-j7gsg") pod "a647567b-b5d7-4001-aeb7-085793d361ae" (UID: "a647567b-b5d7-4001-aeb7-085793d361ae"). InnerVolumeSpecName "kube-api-access-j7gsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.211352 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a647567b-b5d7-4001-aeb7-085793d361ae" (UID: "a647567b-b5d7-4001-aeb7-085793d361ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.291000 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.291031 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.291044 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495302 4985 generic.go:334] "Generic (PLEG): container finished" podID="a647567b-b5d7-4001-aeb7-085793d361ae" containerID="7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06" exitCode=0
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495372 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerDied","Data":"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06"}
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495637 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerDied","Data":"fbb3b7576bc49a07a7ed4e1638eb87bdd32c1fd17054a063d0d281a60776ca08"}
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495661 4985 scope.go:117] "RemoveContainer" containerID="7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495405 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvjh4"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.519954 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"]
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.531921 4985 scope.go:117] "RemoveContainer" containerID="77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.534562 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"]
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.552770 4985 scope.go:117] "RemoveContainer" containerID="9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.623597 4985 scope.go:117] "RemoveContainer" containerID="7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06"
Jan 28 18:47:43 crc kubenswrapper[4985]: E0128 18:47:43.624092 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06\": container with ID starting with 7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06 not found: ID does not exist" containerID="7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.624143 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06"} err="failed to get container status \"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06\": rpc error: code = NotFound desc = could not find container \"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06\": container with ID starting with 7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06 not found: ID does not exist"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.624172 4985 scope.go:117] "RemoveContainer" containerID="77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656"
Jan 28 18:47:43 crc kubenswrapper[4985]: E0128 18:47:43.625296 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656\": container with ID starting with 77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656 not found: ID does not exist" containerID="77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.625327 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656"} err="failed to get container status \"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656\": rpc error: code = NotFound desc = could not find container \"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656\": container with ID starting with 77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656 not found: ID does not exist"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.625350 4985 scope.go:117] "RemoveContainer" containerID="9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2"
Jan 28 18:47:43 crc kubenswrapper[4985]: E0128 18:47:43.625646 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2\": container with ID starting with 9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2 not found: ID does not exist" containerID="9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2"
Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.625671 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2"} err="failed to get container status \"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2\": rpc error: code = NotFound desc = could not find container \"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2\": container with ID starting with 9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2 not found: ID does not exist"
Jan 28 18:47:45 crc kubenswrapper[4985]: I0128 18:47:45.278115 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" path="/var/lib/kubelet/pods/a647567b-b5d7-4001-aeb7-085793d361ae/volumes"
Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.061011 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8h4kr"]
Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.112801 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hlgnm"]
Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.131845 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hlgnm"]
Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.148228 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8h4kr"]
Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.280075 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" path="/var/lib/kubelet/pods/4a3199c2-6b1c-4a07-849d-cc92d372c5c3/volumes"
Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.283652 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f788adab-3912-43da-869e-2450d65b761f" path="/var/lib/kubelet/pods/f788adab-3912-43da-869e-2450d65b761f/volumes"
Jan 28 18:47:48 crc kubenswrapper[4985]: I0128 18:47:48.029764 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9w9wm"]
Jan 28 18:47:48 crc kubenswrapper[4985]: I0128 18:47:48.043727 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9w9wm"]
Jan 28 18:47:49 crc kubenswrapper[4985]: I0128 18:47:49.277701 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba5eedf-14b8-45ce-b738-e41a6daff299" path="/var/lib/kubelet/pods/2ba5eedf-14b8-45ce-b738-e41a6daff299/volumes"
Jan 28 18:47:49 crc kubenswrapper[4985]: I0128 18:47:49.518781 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6l7vb"
Jan 28 18:47:49 crc kubenswrapper[4985]: I0128 18:47:49.587065 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6l7vb"
Jan 28 18:47:50 crc kubenswrapper[4985]: I0128 18:47:50.731417 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"]
Jan 28 18:47:50 crc kubenswrapper[4985]: I0128 18:47:50.731973 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6l7vb" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server" containerID="cri-o://de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae" gracePeriod=2
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.337525 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6l7vb"
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.504440 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") pod \"13b350b8-ace5-45c9-9de3-0b4887795c48\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") "
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.504585 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") pod \"13b350b8-ace5-45c9-9de3-0b4887795c48\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") "
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.504717 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") pod \"13b350b8-ace5-45c9-9de3-0b4887795c48\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") "
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.505648 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities" (OuterVolumeSpecName: "utilities") pod "13b350b8-ace5-45c9-9de3-0b4887795c48" (UID: "13b350b8-ace5-45c9-9de3-0b4887795c48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.518520 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5" (OuterVolumeSpecName: "kube-api-access-s5qc5") pod "13b350b8-ace5-45c9-9de3-0b4887795c48" (UID: "13b350b8-ace5-45c9-9de3-0b4887795c48"). InnerVolumeSpecName "kube-api-access-s5qc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.582904 4985 generic.go:334] "Generic (PLEG): container finished" podID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerID="de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae" exitCode=0
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.582958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerDied","Data":"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae"}
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.583013 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerDied","Data":"04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962"}
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.583053 4985 scope.go:117] "RemoveContainer" containerID="de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae"
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.583075 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6l7vb"
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.608226 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.608291 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.617633 4985 scope.go:117] "RemoveContainer" containerID="91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5"
Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.640206 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13b350b8-ace5-45c9-9de3-0b4887795c48" (UID: "13b350b8-ace5-45c9-9de3-0b4887795c48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.687630 4985 scope.go:117] "RemoveContainer" containerID="8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.712215 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.721545 4985 scope.go:117] "RemoveContainer" containerID="de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae" Jan 28 18:47:51 crc kubenswrapper[4985]: E0128 18:47:51.722384 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae\": container with ID starting with de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae not found: ID does not exist" containerID="de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.722456 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae"} err="failed to get container status \"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae\": rpc error: code = NotFound desc = could not find container \"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae\": container with ID starting with de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae not found: ID does not exist" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.722499 4985 scope.go:117] "RemoveContainer" containerID="91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5" Jan 28 18:47:51 crc kubenswrapper[4985]: E0128 18:47:51.723126 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5\": container with ID starting with 91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5 not found: ID does not exist" containerID="91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.723185 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5"} err="failed to get container status \"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5\": rpc error: code = NotFound desc = could not find container \"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5\": container with ID starting with 91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5 not found: ID does not exist" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.723225 4985 scope.go:117] "RemoveContainer" containerID="8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51" Jan 28 18:47:51 crc kubenswrapper[4985]: E0128 18:47:51.723935 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51\": container with ID starting with 8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51 not found: ID does not exist" 
containerID="8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.723971 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51"} err="failed to get container status \"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51\": rpc error: code = NotFound desc = could not find container \"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51\": container with ID starting with 8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51 not found: ID does not exist" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.941168 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.953176 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:53 crc kubenswrapper[4985]: I0128 18:47:53.276199 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" path="/var/lib/kubelet/pods/13b350b8-ace5-45c9-9de3-0b4887795c48/volumes" Jan 28 18:48:08 crc kubenswrapper[4985]: I0128 18:48:08.087383 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-s8hs9"] Jan 28 18:48:08 crc kubenswrapper[4985]: I0128 18:48:08.103688 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-dwwcb"] Jan 28 18:48:08 crc kubenswrapper[4985]: I0128 18:48:08.118296 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-s8hs9"] Jan 28 18:48:08 crc kubenswrapper[4985]: I0128 18:48:08.131281 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dwwcb"] Jan 28 18:48:09 crc kubenswrapper[4985]: I0128 18:48:09.278524 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" path="/var/lib/kubelet/pods/b64f0d6c-55b7-4eac-85f6-e78b581cbebc/volumes" Jan 28 18:48:09 crc kubenswrapper[4985]: I0128 18:48:09.279812 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" path="/var/lib/kubelet/pods/feecd29d-1d64-47f4-a1af-e634b7d87f3a/volumes" Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.185725 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.186085 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.186136 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.187112 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.187174 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87" gracePeriod=600 Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.802043 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87" exitCode=0 Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.802379 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87"} Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.802407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"} Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.802427 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.732789 4985 scope.go:117] "RemoveContainer" containerID="badce37bfe68dc4bcc676f7b0c786e9f03574bc7e99b889419d42e1d88e90514" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.784335 4985 scope.go:117] "RemoveContainer" containerID="bf3748442896f3bbadb859f2d03e272740c521c498e8208b7d4bed6a247a0dd0" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.867810 4985 scope.go:117] "RemoveContainer" containerID="461350d6795ff69f1fd203af637d4dd96dfc2a84c72f138630ab057e524c2df1" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.903478 4985 scope.go:117] "RemoveContainer" containerID="38e38c87534fe5e2e6e7da069589b30c70844285bffd29f51db0ab1e32c6ef5c" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.979343 4985 scope.go:117] "RemoveContainer" containerID="ff21852bdb082ecfb847ad06c015a8a45e3369552ad08ad1a4b52a4cb479bc06" Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.047131 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.058863 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.069372 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.078485 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.299775 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f84c63-5719-4c32-bbc7-d7960fe35d35" 
path="/var/lib/kubelet/pods/52f84c63-5719-4c32-bbc7-d7960fe35d35/volumes" Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.335113 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" path="/var/lib/kubelet/pods/dc09e699-e5ce-4e02-b3ae-ce43d120e70d/volumes" Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.048219 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.064675 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.078285 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.089044 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.100623 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.112950 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.123982 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.140615 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:49:13 crc kubenswrapper[4985]: I0128 18:49:13.278728 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" path="/var/lib/kubelet/pods/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae/volumes" Jan 28 18:49:13 crc kubenswrapper[4985]: I0128 18:49:13.281153 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ac3925-bebe-4c63-999f-073386005723" path="/var/lib/kubelet/pods/75ac3925-bebe-4c63-999f-073386005723/volumes" Jan 28 18:49:13 crc kubenswrapper[4985]: I0128 18:49:13.282061 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" path="/var/lib/kubelet/pods/b4efe2ca-1bc9-40db-944e-fb86222e4f98/volumes" Jan 28 18:49:13 crc kubenswrapper[4985]: I0128 18:49:13.282893 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc08dbb5-2423-4fe9-8c21-a668459cad74" path="/var/lib/kubelet/pods/dc08dbb5-2423-4fe9-8c21-a668459cad74/volumes" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.249329 4985 scope.go:117] "RemoveContainer" containerID="4bc3d7f5e4e6dada67f4a141ee7828a9a6e0f2e232ee13a2c55fb56665c8dcf7" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.278586 4985 scope.go:117] "RemoveContainer" containerID="c45d2c9f516bceabb6c91c348f68e974205ef1034563c42f6346b513ae9f2b4e" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.349673 4985 scope.go:117] "RemoveContainer" containerID="d941727c28e1382267609d1ceda76e73a9f2d9cd3d596bc04e5cda672a1166cb" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.423200 4985 scope.go:117] "RemoveContainer" containerID="93175a518881e892d15535448d5c38da897596006be51be39132a6908ffae666" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.534129 4985 scope.go:117] "RemoveContainer" 
containerID="c2b4778aba3ad4aab0ffc010a57b2670dae7de8ea4b986e78468cc76f9181467" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.600627 4985 scope.go:117] "RemoveContainer" containerID="6a970a7bb0cf6a6924c094b8183cf37c24dab48878e09e30bf62063b33da4241" Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.044097 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"] Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.055524 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"] Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.277066 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5e9657-f657-4f0e-9d46-31c6942e70d2" path="/var/lib/kubelet/pods/df5e9657-f657-4f0e-9d46-31c6942e70d2/volumes" Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.958445 4985 generic.go:334] "Generic (PLEG): container finished" podID="fbfc48e7-8a35-4fc6-b9fd-0c1735864116" containerID="24ae801d110a2ccea339ddd0d6272cdb220439bc5457fb577978b735b741f7fc" exitCode=0 Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.958537 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" event={"ID":"fbfc48e7-8a35-4fc6-b9fd-0c1735864116","Type":"ContainerDied","Data":"24ae801d110a2ccea339ddd0d6272cdb220439bc5457fb577978b735b741f7fc"} Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.540052 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.691188 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") pod \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.691669 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") pod \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.691767 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") pod \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.697413 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt" (OuterVolumeSpecName: "kube-api-access-zzgkt") pod "fbfc48e7-8a35-4fc6-b9fd-0c1735864116" (UID: "fbfc48e7-8a35-4fc6-b9fd-0c1735864116"). InnerVolumeSpecName "kube-api-access-zzgkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.730157 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory" (OuterVolumeSpecName: "inventory") pod "fbfc48e7-8a35-4fc6-b9fd-0c1735864116" (UID: "fbfc48e7-8a35-4fc6-b9fd-0c1735864116"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.734907 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbfc48e7-8a35-4fc6-b9fd-0c1735864116" (UID: "fbfc48e7-8a35-4fc6-b9fd-0c1735864116"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.794783 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.794826 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.794836 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.982472 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" event={"ID":"fbfc48e7-8a35-4fc6-b9fd-0c1735864116","Type":"ContainerDied","Data":"3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56"} Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.982517 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.982551 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.108718 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"] Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109341 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109359 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109388 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109397 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109409 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109417 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109432 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfc48e7-8a35-4fc6-b9fd-0c1735864116" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109441 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfc48e7-8a35-4fc6-b9fd-0c1735864116" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109457 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109465 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109479 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109486 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109502 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109512 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109530 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109538 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server" Jan 28 
18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109555 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109560 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109581 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109587 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109872 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109911 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109932 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109946 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfc48e7-8a35-4fc6-b9fd-0c1735864116" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.111057 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.115666 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.115856 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.115874 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.120539 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.128717 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"] Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.204461 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.204563 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.204737 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.307419 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.307618 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.307659 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.311604 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.314832 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.324942 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.433730 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.992541 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"] Jan 28 18:49:56 crc kubenswrapper[4985]: I0128 18:49:56.006615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" event={"ID":"ed5a5127-7214-4f45-bda0-a1c6ecbaaede","Type":"ContainerStarted","Data":"c9f2f497bdfc010d8b6ae9a2d144192486869cb9cba3b65990bd74b61e389db6"} Jan 28 18:49:56 crc kubenswrapper[4985]: I0128 18:49:56.007114 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" event={"ID":"ed5a5127-7214-4f45-bda0-a1c6ecbaaede","Type":"ContainerStarted","Data":"633bd975811338a8dd128feac23d6ada0361cd583588d9bc8c1c9bc2d16bbffc"} Jan 28 18:49:56 crc kubenswrapper[4985]: I0128 18:49:56.024635 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" podStartSLOduration=1.5969992880000001 podStartE2EDuration="2.024604424s" podCreationTimestamp="2026-01-28 18:49:54 +0000 UTC" firstStartedPulling="2026-01-28 18:49:55.002633994 +0000 UTC m=+2205.829196815" lastFinishedPulling="2026-01-28 18:49:55.43023913 +0000 UTC m=+2206.256801951" observedRunningTime="2026-01-28 18:49:56.021138916 +0000 UTC m=+2206.847701737" watchObservedRunningTime="2026-01-28 18:49:56.024604424 +0000 UTC m=+2206.851167245" Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.049265 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"] Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.060715 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-jdztq"] Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.070852 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"] Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.080901 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-jdztq"] Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.185886 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.185971 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.284391 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" path="/var/lib/kubelet/pods/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5/volumes" Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.288408 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2578b35-7408-46ed-bcee-8b0ff114cd33" 
path="/var/lib/kubelet/pods/c2578b35-7408-46ed-bcee-8b0ff114cd33/volumes" Jan 28 18:50:15 crc kubenswrapper[4985]: I0128 18:50:15.041609 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"] Jan 28 18:50:15 crc kubenswrapper[4985]: I0128 18:50:15.053840 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"] Jan 28 18:50:15 crc kubenswrapper[4985]: I0128 18:50:15.280365 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e43739-91f4-43c9-9b01-5f0574a3b150" path="/var/lib/kubelet/pods/14e43739-91f4-43c9-9b01-5f0574a3b150/volumes" Jan 28 18:50:21 crc kubenswrapper[4985]: I0128 18:50:21.073338 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"] Jan 28 18:50:21 crc kubenswrapper[4985]: I0128 18:50:21.088896 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"] Jan 28 18:50:21 crc kubenswrapper[4985]: I0128 18:50:21.279571 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc545ce7-58a7-4757-8eab-8b0a28570a49" path="/var/lib/kubelet/pods/dc545ce7-58a7-4757-8eab-8b0a28570a49/volumes" Jan 28 18:50:40 crc kubenswrapper[4985]: I0128 18:50:40.802310 4985 scope.go:117] "RemoveContainer" containerID="c83af2ab400014fc785ba01cb5de51bf84a3ea8da54f74af11e2f8a7b4d8bbce" Jan 28 18:50:40 crc kubenswrapper[4985]: I0128 18:50:40.841050 4985 scope.go:117] "RemoveContainer" containerID="5fa6b37534633df411a4bdc3fa77962a9df43667fb32532c9621de45df63d178" Jan 28 18:50:40 crc kubenswrapper[4985]: I0128 18:50:40.902564 4985 scope.go:117] "RemoveContainer" containerID="178c7940c1e7c85eaf00e787d93879f89e3e05e71f11cbc272b8188e9429d0c9" Jan 28 18:50:40 crc kubenswrapper[4985]: I0128 18:50:40.954838 4985 scope.go:117] "RemoveContainer" containerID="ea52163bdf8a3e8c42d7f0dbeffc6baafb9ed87c32e573d1569132ee3f06dfb6" Jan 28 18:50:41 crc kubenswrapper[4985]: I0128 18:50:41.025547 4985 scope.go:117] "RemoveContainer" containerID="382f43a07ac5b420a95def886ddd1d4454cef25ffaca287fa20c580c3c9e42fc" Jan 28 18:50:41 crc kubenswrapper[4985]: I0128 18:50:41.185837 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:50:41 crc kubenswrapper[4985]: I0128 18:50:41.185913 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:51:07 crc kubenswrapper[4985]: I0128 18:51:07.078975 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"] Jan 28 18:51:07 crc kubenswrapper[4985]: I0128 18:51:07.102362 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"] Jan 28 18:51:07 crc kubenswrapper[4985]: I0128 18:51:07.279431 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" path="/var/lib/kubelet/pods/aabefa44-123b-48ce-a38b-8c5f6ed32b73/volumes" Jan 28 18:51:09 crc kubenswrapper[4985]: I0128 18:51:09.815294 4985 generic.go:334] 
"Generic (PLEG): container finished" podID="ed5a5127-7214-4f45-bda0-a1c6ecbaaede" containerID="c9f2f497bdfc010d8b6ae9a2d144192486869cb9cba3b65990bd74b61e389db6" exitCode=0 Jan 28 18:51:09 crc kubenswrapper[4985]: I0128 18:51:09.815392 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" event={"ID":"ed5a5127-7214-4f45-bda0-a1c6ecbaaede","Type":"ContainerDied","Data":"c9f2f497bdfc010d8b6ae9a2d144192486869cb9cba3b65990bd74b61e389db6"} Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.187037 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.187626 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.187724 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.189527 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.189627 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" gracePeriod=600 Jan 28 18:51:11 crc kubenswrapper[4985]: E0128 18:51:11.333794 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.530358 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.646235 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") pod \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.646343 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") pod \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.646385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") pod \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.656288 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l" (OuterVolumeSpecName: "kube-api-access-l528l") pod "ed5a5127-7214-4f45-bda0-a1c6ecbaaede" (UID: "ed5a5127-7214-4f45-bda0-a1c6ecbaaede"). InnerVolumeSpecName "kube-api-access-l528l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.684502 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory" (OuterVolumeSpecName: "inventory") pod "ed5a5127-7214-4f45-bda0-a1c6ecbaaede" (UID: "ed5a5127-7214-4f45-bda0-a1c6ecbaaede"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.684978 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed5a5127-7214-4f45-bda0-a1c6ecbaaede" (UID: "ed5a5127-7214-4f45-bda0-a1c6ecbaaede"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.749829 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.749863 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.749873 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") on node \"crc\" DevicePath \"\"" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.845781 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" exitCode=0 Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.845906 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"} Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.845963 4985 scope.go:117] "RemoveContainer" containerID="b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.846770 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:51:11 crc kubenswrapper[4985]: E0128 18:51:11.847065 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.856895 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" event={"ID":"ed5a5127-7214-4f45-bda0-a1c6ecbaaede","Type":"ContainerDied","Data":"633bd975811338a8dd128feac23d6ada0361cd583588d9bc8c1c9bc2d16bbffc"} Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.856943 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633bd975811338a8dd128feac23d6ada0361cd583588d9bc8c1c9bc2d16bbffc" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.857064 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.989702 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"] Jan 28 18:51:11 crc kubenswrapper[4985]: E0128 18:51:11.990348 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5a5127-7214-4f45-bda0-a1c6ecbaaede" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.990367 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5a5127-7214-4f45-bda0-a1c6ecbaaede" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.990626 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5a5127-7214-4f45-bda0-a1c6ecbaaede" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.992904 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.996023 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.996234 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.996553 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.997966 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.016850 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"] Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.159396 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.160022 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.160199 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.262684 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.263078 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.263210 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.266974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.267129 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.282053 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.320163 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.874093 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"] Jan 28 18:51:13 crc kubenswrapper[4985]: I0128 18:51:13.878773 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" event={"ID":"ae55970b-52a8-4bd7-8d82-853e9cd4ad32","Type":"ContainerStarted","Data":"98d7adb89708d071f297f54f218b65a95a49b9820984fb652f611d4d070a95ca"} Jan 28 18:51:13 crc kubenswrapper[4985]: I0128 18:51:13.879367 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" event={"ID":"ae55970b-52a8-4bd7-8d82-853e9cd4ad32","Type":"ContainerStarted","Data":"eae23b0ff4b25c1d144fc5ec4fddcb5528ef6851dd78e1e85edddba6a291da24"} Jan 28 18:51:13 crc kubenswrapper[4985]: I0128 18:51:13.899698 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" podStartSLOduration=2.4909154510000002 podStartE2EDuration="2.899664224s" podCreationTimestamp="2026-01-28 18:51:11 +0000 UTC" firstStartedPulling="2026-01-28 18:51:12.880857673 +0000 UTC m=+2283.707420494" lastFinishedPulling="2026-01-28 18:51:13.289606446 +0000 UTC m=+2284.116169267" observedRunningTime="2026-01-28 18:51:13.894191979 +0000 UTC m=+2284.720754820" watchObservedRunningTime="2026-01-28 18:51:13.899664224 +0000 UTC m=+2284.726227085" Jan 28 18:51:18 crc kubenswrapper[4985]: I0128 18:51:18.928951 4985 generic.go:334] "Generic (PLEG): container finished" podID="ae55970b-52a8-4bd7-8d82-853e9cd4ad32" containerID="98d7adb89708d071f297f54f218b65a95a49b9820984fb652f611d4d070a95ca" exitCode=0 Jan 28 18:51:18 crc kubenswrapper[4985]: I0128 18:51:18.929103 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" event={"ID":"ae55970b-52a8-4bd7-8d82-853e9cd4ad32","Type":"ContainerDied","Data":"98d7adb89708d071f297f54f218b65a95a49b9820984fb652f611d4d070a95ca"} Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.448784 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.573838 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") pod \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.574010 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") pod \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.574061 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") pod \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.586206 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5" (OuterVolumeSpecName: "kube-api-access-2dtl5") pod "ae55970b-52a8-4bd7-8d82-853e9cd4ad32" (UID: "ae55970b-52a8-4bd7-8d82-853e9cd4ad32"). InnerVolumeSpecName "kube-api-access-2dtl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.610508 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory" (OuterVolumeSpecName: "inventory") pod "ae55970b-52a8-4bd7-8d82-853e9cd4ad32" (UID: "ae55970b-52a8-4bd7-8d82-853e9cd4ad32"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.627944 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae55970b-52a8-4bd7-8d82-853e9cd4ad32" (UID: "ae55970b-52a8-4bd7-8d82-853e9cd4ad32"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.677487 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") on node \"crc\" DevicePath \"\"" Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.677523 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.677533 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.950844 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" event={"ID":"ae55970b-52a8-4bd7-8d82-853e9cd4ad32","Type":"ContainerDied","Data":"eae23b0ff4b25c1d144fc5ec4fddcb5528ef6851dd78e1e85edddba6a291da24"} Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.950889 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eae23b0ff4b25c1d144fc5ec4fddcb5528ef6851dd78e1e85edddba6a291da24" Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.950917 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.033095 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"] Jan 28 18:51:21 crc kubenswrapper[4985]: E0128 18:51:21.034473 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae55970b-52a8-4bd7-8d82-853e9cd4ad32" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.034506 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae55970b-52a8-4bd7-8d82-853e9cd4ad32" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.034921 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae55970b-52a8-4bd7-8d82-853e9cd4ad32" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.036066 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.040170 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.040705 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.041406 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.041428 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.051273 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"] Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.091505 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.092042 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.092338 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.194227 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.194403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.194472 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.199817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.204201 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.216386 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.384893 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.956667 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"] Jan 28 18:51:21 crc kubenswrapper[4985]: W0128 18:51:21.962622 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3baf8df5_1989_4678_8268_058f46511cfd.slice/crio-de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15 WatchSource:0}: Error finding container de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15: Status 404 returned error can't find the container with id de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15 Jan 28 18:51:22 crc kubenswrapper[4985]: I0128 18:51:22.984586 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" event={"ID":"3baf8df5-1989-4678-8268-058f46511cfd","Type":"ContainerStarted","Data":"4383685dfaa76d9d94b6ad842212a447752cd35bd7edf70dce99f868bdd8e572"} Jan 28 18:51:22 crc kubenswrapper[4985]: I0128 18:51:22.984993 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" event={"ID":"3baf8df5-1989-4678-8268-058f46511cfd","Type":"ContainerStarted","Data":"de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15"} Jan 28 18:51:23 crc kubenswrapper[4985]: I0128 18:51:23.009945 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" podStartSLOduration=2.551916071 podStartE2EDuration="3.009921578s" podCreationTimestamp="2026-01-28 18:51:20 +0000 UTC" firstStartedPulling="2026-01-28 18:51:21.966603154 +0000 UTC m=+2292.793165975" lastFinishedPulling="2026-01-28 
Jan 28 18:51:24 crc kubenswrapper[4985]: I0128 18:51:24.264441 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:51:24 crc kubenswrapper[4985]: E0128 18:51:24.265087 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:51:35 crc kubenswrapper[4985]: I0128 18:51:35.263881 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:51:35 crc kubenswrapper[4985]: E0128 18:51:35.264739 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:51:41 crc kubenswrapper[4985]: I0128 18:51:41.241685 4985 scope.go:117] "RemoveContainer" containerID="db5c8f620d59499400c9788d3b5dfb76a365065e272b490b2eae142e49cd78fa"
Jan 28 18:51:47 crc kubenswrapper[4985]: I0128 18:51:47.264694 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:51:47 crc kubenswrapper[4985]: E0128 18:51:47.265727 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:51:57 crc kubenswrapper[4985]: I0128 18:51:57.359580 4985 generic.go:334] "Generic (PLEG): container finished" podID="3baf8df5-1989-4678-8268-058f46511cfd" containerID="4383685dfaa76d9d94b6ad842212a447752cd35bd7edf70dce99f868bdd8e572" exitCode=0
Jan 28 18:51:57 crc kubenswrapper[4985]: I0128 18:51:57.359700 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" event={"ID":"3baf8df5-1989-4678-8268-058f46511cfd","Type":"ContainerDied","Data":"4383685dfaa76d9d94b6ad842212a447752cd35bd7edf70dce99f868bdd8e572"}
Jan 28 18:51:58 crc kubenswrapper[4985]: I0128 18:51:58.863745 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.062655 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") pod \"3baf8df5-1989-4678-8268-058f46511cfd\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.062936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") pod \"3baf8df5-1989-4678-8268-058f46511cfd\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.063058 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") pod \"3baf8df5-1989-4678-8268-058f46511cfd\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.068597 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn" (OuterVolumeSpecName: "kube-api-access-htjkn") pod "3baf8df5-1989-4678-8268-058f46511cfd" (UID: "3baf8df5-1989-4678-8268-058f46511cfd"). InnerVolumeSpecName "kube-api-access-htjkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.099824 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory" (OuterVolumeSpecName: "inventory") pod "3baf8df5-1989-4678-8268-058f46511cfd" (UID: "3baf8df5-1989-4678-8268-058f46511cfd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.107131 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3baf8df5-1989-4678-8268-058f46511cfd" (UID: "3baf8df5-1989-4678-8268-058f46511cfd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.166503 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.166779 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.166872 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") on node \"crc\" DevicePath \"\"" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.382943 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" event={"ID":"3baf8df5-1989-4678-8268-058f46511cfd","Type":"ContainerDied","Data":"de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15"} Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.382988 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.382993 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.463419 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"] Jan 28 18:51:59 crc kubenswrapper[4985]: E0128 18:51:59.463900 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3baf8df5-1989-4678-8268-058f46511cfd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.463929 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3baf8df5-1989-4678-8268-058f46511cfd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.464218 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3baf8df5-1989-4678-8268-058f46511cfd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.465060 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.467749 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.467830 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.467977 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.468898 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.481465 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"] Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.574989 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhtg5\" (UniqueName: \"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.576014 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.576057 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.678719 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.679063 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.679344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhtg5\" (UniqueName: 
\"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.693363 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.693405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.705840 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhtg5\" (UniqueName: \"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.782340 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:52:00 crc kubenswrapper[4985]: I0128 18:52:00.264662 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:52:00 crc kubenswrapper[4985]: E0128 18:52:00.265266 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:52:00 crc kubenswrapper[4985]: I0128 18:52:00.375472 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"] Jan 28 18:52:00 crc kubenswrapper[4985]: I0128 18:52:00.396754 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" event={"ID":"89fa72dd-7320-41fe-8df4-161d84d41b84","Type":"ContainerStarted","Data":"69e84bb4165150e69508936186fc071f5e407b53051ee5c709bb96091d6e8096"} Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.420617 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" event={"ID":"89fa72dd-7320-41fe-8df4-161d84d41b84","Type":"ContainerStarted","Data":"0e0531b2a17e581c154af6c43df638fbe2cddb08d8bf5196709cce369d24856b"} Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.443442 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" podStartSLOduration=1.769456439 podStartE2EDuration="2.443419345s" podCreationTimestamp="2026-01-28 18:51:59 +0000 UTC" firstStartedPulling="2026-01-28 18:52:00.373607941 +0000 UTC m=+2331.200170762" lastFinishedPulling="2026-01-28 18:52:01.047570847 +0000 UTC m=+2331.874133668" observedRunningTime="2026-01-28 18:52:01.437389034 +0000 UTC m=+2332.263951875" watchObservedRunningTime="2026-01-28 18:52:01.443419345 +0000 UTC m=+2332.269982166" Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.692322 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-92ddg"] Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.694563 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.717900 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92ddg"] Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.844140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.844387 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.844468 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.946361 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.946558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.946662 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.947080 
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.947109 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.965323 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:02 crc kubenswrapper[4985]: I0128 18:52:02.032365 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:02 crc kubenswrapper[4985]: I0128 18:52:02.607183 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92ddg"]
Jan 28 18:52:03 crc kubenswrapper[4985]: I0128 18:52:03.446897 4985 generic.go:334] "Generic (PLEG): container finished" podID="2599bc38-c112-4351-a069-1e7f48fd913e" containerID="3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d" exitCode=0
Jan 28 18:52:03 crc kubenswrapper[4985]: I0128 18:52:03.447215 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerDied","Data":"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d"}
Jan 28 18:52:03 crc kubenswrapper[4985]: I0128 18:52:03.447302 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerStarted","Data":"c4796d97bbbc44e9555f2a920af4e29b811b1c5305de97b4f4d8ea5af4e33a12"}
Jan 28 18:52:06 crc kubenswrapper[4985]: I0128 18:52:06.482943 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerStarted","Data":"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0"}
Jan 28 18:52:10 crc kubenswrapper[4985]: I0128 18:52:10.532784 4985 generic.go:334] "Generic (PLEG): container finished" podID="2599bc38-c112-4351-a069-1e7f48fd913e" containerID="8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0" exitCode=0
Jan 28 18:52:10 crc kubenswrapper[4985]: I0128 18:52:10.532906 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerDied","Data":"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0"}
Jan 28 18:52:13 crc kubenswrapper[4985]: I0128 18:52:13.265645 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:52:13 crc kubenswrapper[4985]: E0128 18:52:13.266493 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:52:13 crc kubenswrapper[4985]: I0128 18:52:13.584376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerStarted","Data":"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764"} Jan 28 18:52:13 crc kubenswrapper[4985]: I0128 18:52:13.605588 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-92ddg" podStartSLOduration=3.117826319 podStartE2EDuration="12.605562409s" podCreationTimestamp="2026-01-28 18:52:01 +0000 UTC" firstStartedPulling="2026-01-28 18:52:03.449571925 +0000 UTC m=+2334.276134756" lastFinishedPulling="2026-01-28 18:52:12.937308025 +0000 UTC m=+2343.763870846" observedRunningTime="2026-01-28 18:52:13.601188755 +0000 UTC m=+2344.427751606" watchObservedRunningTime="2026-01-28 18:52:13.605562409 +0000 UTC m=+2344.432125230" Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.032699 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.036065 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.091601 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.806787 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.857911 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92ddg"] Jan 28 18:52:24 crc kubenswrapper[4985]: I0128 18:52:24.787600 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-92ddg" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="registry-server" containerID="cri-o://f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" gracePeriod=2 Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.344279 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.504114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") pod \"2599bc38-c112-4351-a069-1e7f48fd913e\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.504336 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") pod \"2599bc38-c112-4351-a069-1e7f48fd913e\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.504374 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content\") pod \"2599bc38-c112-4351-a069-1e7f48fd913e\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.505395 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities" (OuterVolumeSpecName: "utilities") pod "2599bc38-c112-4351-a069-1e7f48fd913e" (UID: "2599bc38-c112-4351-a069-1e7f48fd913e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.510943 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj" (OuterVolumeSpecName: "kube-api-access-wf6mj") pod "2599bc38-c112-4351-a069-1e7f48fd913e" (UID: "2599bc38-c112-4351-a069-1e7f48fd913e"). InnerVolumeSpecName "kube-api-access-wf6mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.556451 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2599bc38-c112-4351-a069-1e7f48fd913e" (UID: "2599bc38-c112-4351-a069-1e7f48fd913e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.606890 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.606918 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.606927 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807053 4985 generic.go:334] "Generic (PLEG): container finished" podID="2599bc38-c112-4351-a069-1e7f48fd913e" containerID="f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" exitCode=0 Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807103 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerDied","Data":"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764"} Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerDied","Data":"c4796d97bbbc44e9555f2a920af4e29b811b1c5305de97b4f4d8ea5af4e33a12"} Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807147 4985 scope.go:117] "RemoveContainer" containerID="f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807243 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.848096 4985 scope.go:117] "RemoveContainer" containerID="8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.873482 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92ddg"] Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.877522 4985 scope.go:117] "RemoveContainer" containerID="3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.888303 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-92ddg"] Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.940213 4985 scope.go:117] "RemoveContainer" containerID="f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" Jan 28 18:52:25 crc kubenswrapper[4985]: E0128 18:52:25.940631 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764\": container with ID starting with f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764 not found: ID does not exist" containerID="f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.940664 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764"} err="failed to get container status \"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764\": rpc error: code = NotFound desc = could not find container \"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764\": container with ID starting with f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764 not found: ID does not exist" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.940684 4985 scope.go:117] "RemoveContainer" containerID="8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0" Jan 28 18:52:25 crc kubenswrapper[4985]: E0128 18:52:25.941088 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0\": container with ID starting with 8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0 not found: ID does not exist" containerID="8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.941125 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0"} err="failed to get container status \"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0\": rpc error: code = NotFound desc = could not find container \"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0\": container with ID starting with 8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0 not found: ID does not exist" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.941150 4985 scope.go:117] "RemoveContainer" containerID="3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d" Jan 28 18:52:25 crc kubenswrapper[4985]: E0128 18:52:25.941460 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d\": container with ID starting with 3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d not found: ID does not exist" containerID="3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.941534 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d"} err="failed to get container status \"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d\": rpc error: code = NotFound desc = could not find container \"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d\": container with ID starting with 3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d not found: ID does not exist" Jan 28 18:52:26 crc kubenswrapper[4985]: I0128 18:52:26.264697 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:52:26 crc kubenswrapper[4985]: E0128 18:52:26.264946 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:52:27 crc kubenswrapper[4985]: I0128 18:52:27.278474 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" path="/var/lib/kubelet/pods/2599bc38-c112-4351-a069-1e7f48fd913e/volumes" Jan 28 18:52:30 crc kubenswrapper[4985]: I0128 18:52:30.058952 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:52:30 crc kubenswrapper[4985]: I0128 18:52:30.068966 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:52:31 crc kubenswrapper[4985]: I0128 18:52:31.279676 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" path="/var/lib/kubelet/pods/627220be-fa5f-49a6-9c9e-b3ae5e49afec/volumes" Jan 28 18:52:40 crc kubenswrapper[4985]: I0128 18:52:40.264153 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:52:40 crc kubenswrapper[4985]: E0128 18:52:40.264952 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:52:41 crc kubenswrapper[4985]: I0128 18:52:41.303885 4985 scope.go:117] "RemoveContainer" containerID="48668effb10b8c0dfeaba93e4a156675d4c8985321775751a1f4f96f69975324" Jan 28 18:52:45 crc kubenswrapper[4985]: I0128 18:52:45.033563 4985 generic.go:334] "Generic (PLEG): container finished" podID="89fa72dd-7320-41fe-8df4-161d84d41b84" containerID="0e0531b2a17e581c154af6c43df638fbe2cddb08d8bf5196709cce369d24856b" exitCode=0 Jan 28 18:52:45 crc 
Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.623371 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.775235 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhtg5\" (UniqueName: \"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") pod \"89fa72dd-7320-41fe-8df4-161d84d41b84\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") "
Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.775718 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") pod \"89fa72dd-7320-41fe-8df4-161d84d41b84\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") "
Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.776122 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") pod \"89fa72dd-7320-41fe-8df4-161d84d41b84\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") "
Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.780572 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5" (OuterVolumeSpecName: "kube-api-access-nhtg5") pod "89fa72dd-7320-41fe-8df4-161d84d41b84" (UID: "89fa72dd-7320-41fe-8df4-161d84d41b84"). InnerVolumeSpecName "kube-api-access-nhtg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.808050 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89fa72dd-7320-41fe-8df4-161d84d41b84" (UID: "89fa72dd-7320-41fe-8df4-161d84d41b84"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.817592 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory" (OuterVolumeSpecName: "inventory") pod "89fa72dd-7320-41fe-8df4-161d84d41b84" (UID: "89fa72dd-7320-41fe-8df4-161d84d41b84"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.879004 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhtg5\" (UniqueName: \"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.879041 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.879050 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.055073 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" event={"ID":"89fa72dd-7320-41fe-8df4-161d84d41b84","Type":"ContainerDied","Data":"69e84bb4165150e69508936186fc071f5e407b53051ee5c709bb96091d6e8096"} Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.055119 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e84bb4165150e69508936186fc071f5e407b53051ee5c709bb96091d6e8096" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.055128 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.161646 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pbrcd"] Jan 28 18:52:47 crc kubenswrapper[4985]: E0128 18:52:47.162201 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="extract-content" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162217 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="extract-content" Jan 28 18:52:47 crc kubenswrapper[4985]: E0128 18:52:47.162232 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fa72dd-7320-41fe-8df4-161d84d41b84" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162240 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fa72dd-7320-41fe-8df4-161d84d41b84" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:52:47 crc kubenswrapper[4985]: E0128 18:52:47.162422 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="registry-server" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162432 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="registry-server" Jan 28 18:52:47 crc kubenswrapper[4985]: E0128 18:52:47.162458 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="extract-utilities" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162465 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="extract-utilities" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162672 4985 
memory_manager.go:354] "RemoveStaleState removing state" podUID="89fa72dd-7320-41fe-8df4-161d84d41b84" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162702 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="registry-server" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.163576 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.166240 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.167220 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.167308 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.167902 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.174797 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pbrcd"] Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.287041 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.287118 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.287158 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.389043 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.389118 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: 
\"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.389151 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.393442 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.394411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.406736 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.500715 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:48 crc kubenswrapper[4985]: I0128 18:52:48.100555 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pbrcd"] Jan 28 18:52:48 crc kubenswrapper[4985]: E0128 18:52:48.107619 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99c460d4_80df_4aac_9fc5_20198855b361.slice/crio-21f3a114fb34bc172393e6035f99b2c7a47aa748ffdcd1a9d9718c53a6ff848d\": RecentStats: unable to find data in memory cache]" Jan 28 18:52:48 crc kubenswrapper[4985]: I0128 18:52:48.109133 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:52:49 crc kubenswrapper[4985]: I0128 18:52:49.076536 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" event={"ID":"99c460d4-80df-4aac-9fc5-20198855b361","Type":"ContainerStarted","Data":"21f3a114fb34bc172393e6035f99b2c7a47aa748ffdcd1a9d9718c53a6ff848d"} Jan 28 18:52:50 crc kubenswrapper[4985]: I0128 18:52:50.088449 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" event={"ID":"99c460d4-80df-4aac-9fc5-20198855b361","Type":"ContainerStarted","Data":"2741ec846d0c85b125eb72113b900c63992136cdaabaee56c98434e51f940177"} Jan 28 18:52:50 crc kubenswrapper[4985]: I0128 18:52:50.107238 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" podStartSLOduration=1.949089676 podStartE2EDuration="3.107213647s" podCreationTimestamp="2026-01-28 18:52:47 +0000 UTC" firstStartedPulling="2026-01-28 18:52:48.108891268 +0000 UTC m=+2378.935454089" lastFinishedPulling="2026-01-28 18:52:49.267015249 +0000 UTC m=+2380.093578060" observedRunningTime="2026-01-28 18:52:50.102196145 +0000 UTC m=+2380.928758956" watchObservedRunningTime="2026-01-28 18:52:50.107213647 +0000 UTC m=+2380.933776468" Jan 28 18:52:54 crc kubenswrapper[4985]: I0128 18:52:54.264233 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:52:54 crc kubenswrapper[4985]: E0128 18:52:54.264947 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:52:56 crc kubenswrapper[4985]: I0128 18:52:56.166921 4985 generic.go:334] "Generic (PLEG): container finished" podID="99c460d4-80df-4aac-9fc5-20198855b361" containerID="2741ec846d0c85b125eb72113b900c63992136cdaabaee56c98434e51f940177" exitCode=0 Jan 28 18:52:56 crc kubenswrapper[4985]: I0128 18:52:56.167226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" event={"ID":"99c460d4-80df-4aac-9fc5-20198855b361","Type":"ContainerDied","Data":"2741ec846d0c85b125eb72113b900c63992136cdaabaee56c98434e51f940177"} Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.669641 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.830138 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") pod \"99c460d4-80df-4aac-9fc5-20198855b361\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.830283 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") pod \"99c460d4-80df-4aac-9fc5-20198855b361\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.831152 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") pod \"99c460d4-80df-4aac-9fc5-20198855b361\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.842321 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg" (OuterVolumeSpecName: "kube-api-access-dc4kg") pod "99c460d4-80df-4aac-9fc5-20198855b361" (UID: "99c460d4-80df-4aac-9fc5-20198855b361"). InnerVolumeSpecName "kube-api-access-dc4kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.862273 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "99c460d4-80df-4aac-9fc5-20198855b361" (UID: "99c460d4-80df-4aac-9fc5-20198855b361"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.863603 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "99c460d4-80df-4aac-9fc5-20198855b361" (UID: "99c460d4-80df-4aac-9fc5-20198855b361"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.934431 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.934474 4985 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.934486 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.199533 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" event={"ID":"99c460d4-80df-4aac-9fc5-20198855b361","Type":"ContainerDied","Data":"21f3a114fb34bc172393e6035f99b2c7a47aa748ffdcd1a9d9718c53a6ff848d"} Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.199576 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21f3a114fb34bc172393e6035f99b2c7a47aa748ffdcd1a9d9718c53a6ff848d" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.199660 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.273535 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l"] Jan 28 18:52:58 crc kubenswrapper[4985]: E0128 18:52:58.274272 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c460d4-80df-4aac-9fc5-20198855b361" containerName="ssh-known-hosts-edpm-deployment" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.274288 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c460d4-80df-4aac-9fc5-20198855b361" containerName="ssh-known-hosts-edpm-deployment" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.274519 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c460d4-80df-4aac-9fc5-20198855b361" containerName="ssh-known-hosts-edpm-deployment" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.275483 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.278318 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.278337 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.278551 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.279109 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.316677 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l"] Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.464236 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.464362 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.464584 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.567421 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.567571 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.568625 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.574144 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.579805 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.590782 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.603678 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:59 crc kubenswrapper[4985]: I0128 18:52:59.144959 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l"] Jan 28 18:52:59 crc kubenswrapper[4985]: I0128 18:52:59.211956 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" event={"ID":"748912b6-cdb7-40bc-875e-563d7913a6dd","Type":"ContainerStarted","Data":"dcfe22e8dda947e5709a88443fa0516b970a985732c45bb442af182dc3677b50"} Jan 28 18:53:00 crc kubenswrapper[4985]: I0128 18:53:00.224312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" event={"ID":"748912b6-cdb7-40bc-875e-563d7913a6dd","Type":"ContainerStarted","Data":"1e43cecc1e91746954a01e7c22855fd2395a40008bd1135ede7e01312ad4e651"} Jan 28 18:53:00 crc kubenswrapper[4985]: I0128 18:53:00.248883 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" podStartSLOduration=1.8293697500000001 podStartE2EDuration="2.248867466s" podCreationTimestamp="2026-01-28 18:52:58 +0000 UTC" firstStartedPulling="2026-01-28 18:52:59.151427572 +0000 UTC m=+2389.977990393" lastFinishedPulling="2026-01-28 18:52:59.570925258 +0000 UTC m=+2390.397488109" observedRunningTime="2026-01-28 18:53:00.241907769 +0000 UTC m=+2391.068470590" watchObservedRunningTime="2026-01-28 18:53:00.248867466 +0000 UTC m=+2391.075430287" Jan 28 18:53:07 crc kubenswrapper[4985]: I0128 18:53:07.265492 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:53:07 crc kubenswrapper[4985]: E0128 18:53:07.267052 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:53:08 crc kubenswrapper[4985]: I0128 18:53:08.348396 4985 generic.go:334] "Generic (PLEG): container finished" podID="748912b6-cdb7-40bc-875e-563d7913a6dd" containerID="1e43cecc1e91746954a01e7c22855fd2395a40008bd1135ede7e01312ad4e651" exitCode=0 Jan 28 18:53:08 crc kubenswrapper[4985]: I0128 18:53:08.348527 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" event={"ID":"748912b6-cdb7-40bc-875e-563d7913a6dd","Type":"ContainerDied","Data":"1e43cecc1e91746954a01e7c22855fd2395a40008bd1135ede7e01312ad4e651"} Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.340280 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.380749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" event={"ID":"748912b6-cdb7-40bc-875e-563d7913a6dd","Type":"ContainerDied","Data":"dcfe22e8dda947e5709a88443fa0516b970a985732c45bb442af182dc3677b50"} Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.381514 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfe22e8dda947e5709a88443fa0516b970a985732c45bb442af182dc3677b50" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.380816 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.460188 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") pod \"748912b6-cdb7-40bc-875e-563d7913a6dd\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.460306 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") pod \"748912b6-cdb7-40bc-875e-563d7913a6dd\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.460457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") pod \"748912b6-cdb7-40bc-875e-563d7913a6dd\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.467105 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb"] Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.467474 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s" (OuterVolumeSpecName: "kube-api-access-zxz7s") pod "748912b6-cdb7-40bc-875e-563d7913a6dd" (UID: "748912b6-cdb7-40bc-875e-563d7913a6dd"). InnerVolumeSpecName "kube-api-access-zxz7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:53:10 crc kubenswrapper[4985]: E0128 18:53:10.467821 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748912b6-cdb7-40bc-875e-563d7913a6dd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.467845 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="748912b6-cdb7-40bc-875e-563d7913a6dd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.468156 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="748912b6-cdb7-40bc-875e-563d7913a6dd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.469174 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.476832 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb"] Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.511782 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "748912b6-cdb7-40bc-875e-563d7913a6dd" (UID: "748912b6-cdb7-40bc-875e-563d7913a6dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.522731 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory" (OuterVolumeSpecName: "inventory") pod "748912b6-cdb7-40bc-875e-563d7913a6dd" (UID: "748912b6-cdb7-40bc-875e-563d7913a6dd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.562782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.562997 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.563052 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.563142 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.563155 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.563165 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.664867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.664973 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.667412 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.668903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.669263 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.682018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.913834 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:11 crc kubenswrapper[4985]: I0128 18:53:11.446751 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb"] Jan 28 18:53:12 crc kubenswrapper[4985]: I0128 18:53:12.404931 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" event={"ID":"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1","Type":"ContainerStarted","Data":"2f9ce5b67c7b62c616f681b2a0211eaf2edaa3939c553a248a0d4ed67636d035"} Jan 28 18:53:12 crc kubenswrapper[4985]: I0128 18:53:12.405331 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" event={"ID":"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1","Type":"ContainerStarted","Data":"de71345ecee583b6977af81b154580f32016dcb1dd583e6778840ce7062e010c"} Jan 28 18:53:12 crc kubenswrapper[4985]: I0128 18:53:12.433935 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" podStartSLOduration=1.935634587 podStartE2EDuration="2.433916992s" podCreationTimestamp="2026-01-28 18:53:10 +0000 UTC" firstStartedPulling="2026-01-28 18:53:11.44546376 +0000 UTC m=+2402.272026581" lastFinishedPulling="2026-01-28 18:53:11.943746165 +0000 UTC m=+2402.770308986" observedRunningTime="2026-01-28 18:53:12.421018147 +0000 UTC m=+2403.247580968" watchObservedRunningTime="2026-01-28 18:53:12.433916992 +0000 UTC m=+2403.260479813" Jan 28 18:53:21 crc kubenswrapper[4985]: I0128 18:53:21.276983 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:53:21 crc kubenswrapper[4985]: E0128 18:53:21.278955 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:53:21 crc kubenswrapper[4985]: I0128 18:53:21.498412 4985 generic.go:334] "Generic (PLEG): container finished" podID="b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" containerID="2f9ce5b67c7b62c616f681b2a0211eaf2edaa3939c553a248a0d4ed67636d035" exitCode=0 Jan 28 18:53:21 crc kubenswrapper[4985]: I0128 18:53:21.498509 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" event={"ID":"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1","Type":"ContainerDied","Data":"2f9ce5b67c7b62c616f681b2a0211eaf2edaa3939c553a248a0d4ed67636d035"} Jan 28 18:53:22 crc kubenswrapper[4985]: I0128 18:53:22.966747 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.094493 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") pod \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.094563 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") pod \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.094587 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") pod \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.101195 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj" (OuterVolumeSpecName: "kube-api-access-rltjj") pod "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" (UID: "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1"). InnerVolumeSpecName "kube-api-access-rltjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.131087 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory" (OuterVolumeSpecName: "inventory") pod "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" (UID: "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.133007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" (UID: "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.197148 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.197179 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.197188 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.524589 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" event={"ID":"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1","Type":"ContainerDied","Data":"de71345ecee583b6977af81b154580f32016dcb1dd583e6778840ce7062e010c"} Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.525095 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de71345ecee583b6977af81b154580f32016dcb1dd583e6778840ce7062e010c" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.524655 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.639020 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"] Jan 28 18:53:23 crc kubenswrapper[4985]: E0128 18:53:23.639574 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.639595 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.639830 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.641120 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.643517 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.645721 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.645839 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646024 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646226 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646028 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646418 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646683 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.647436 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.665123 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"] Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.811630 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.811908 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812065 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812462 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812690 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812799 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812891 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812942 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812977 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813095 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813181 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813372 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813404 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813442 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915430 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915527 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915561 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915592 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915697 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915725 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915826 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: 
\"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915859 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915904 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915936 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915967 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.916053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.916114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.916146 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.920603 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.922394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.922820 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.923555 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.923673 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.924205 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.924465 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925031 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925285 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925484 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.927367 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.932944 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.933042 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" 
Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.938019 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"
Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.961109 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"
Jan 28 18:53:24 crc kubenswrapper[4985]: I0128 18:53:24.533348 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"]
Jan 28 18:53:25 crc kubenswrapper[4985]: I0128 18:53:25.547843 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" event={"ID":"50ce12a8-7d79-4fa2-a879-e3082ba41427","Type":"ContainerStarted","Data":"5cbc89b308fc84d66f980bf1fb8675be5069ce9c0cf07f70762c9a3fe97801e7"}
Jan 28 18:53:25 crc kubenswrapper[4985]: I0128 18:53:25.548186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" event={"ID":"50ce12a8-7d79-4fa2-a879-e3082ba41427","Type":"ContainerStarted","Data":"86b2b13c1f9b434c2e5a83de4df662da5429c9dffd92a5ab4c0c55d94d2c48a1"}
Jan 28 18:53:25 crc kubenswrapper[4985]: I0128 18:53:25.571711 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" podStartSLOduration=2.104136209 podStartE2EDuration="2.571681895s" podCreationTimestamp="2026-01-28 18:53:23 +0000 UTC" firstStartedPulling="2026-01-28 18:53:24.602360205 +0000 UTC m=+2415.428923026" lastFinishedPulling="2026-01-28 18:53:25.069905891 +0000 UTC m=+2415.896468712" observedRunningTime="2026-01-28 18:53:25.570180543 +0000 UTC m=+2416.396743374" watchObservedRunningTime="2026-01-28 18:53:25.571681895 +0000 UTC m=+2416.398244716"
Jan 28 18:53:32 crc kubenswrapper[4985]: I0128 18:53:32.264910 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:53:32 crc kubenswrapper[4985]: E0128 18:53:32.267060 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:53:47 crc kubenswrapper[4985]: I0128 18:53:47.264029 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:53:47 crc kubenswrapper[4985]: E0128 18:53:47.264902 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:53:58 crc kubenswrapper[4985]: I0128 18:53:58.265079 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:53:58 crc kubenswrapper[4985]: E0128 18:53:58.266124 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:54:03 crc kubenswrapper[4985]: I0128 18:54:03.060637 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-6bqfv"]
Jan 28 18:54:03 crc kubenswrapper[4985]: I0128 18:54:03.076719 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-6bqfv"]
Jan 28 18:54:03 crc kubenswrapper[4985]: I0128 18:54:03.278032 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d276e0b0-f662-443c-a126-003ee44287c8" path="/var/lib/kubelet/pods/d276e0b0-f662-443c-a126-003ee44287c8/volumes"
Jan 28 18:54:05 crc kubenswrapper[4985]: I0128 18:54:05.017814 4985 generic.go:334] "Generic (PLEG): container finished" podID="50ce12a8-7d79-4fa2-a879-e3082ba41427" containerID="5cbc89b308fc84d66f980bf1fb8675be5069ce9c0cf07f70762c9a3fe97801e7" exitCode=0
Jan 28 18:54:05 crc kubenswrapper[4985]: I0128 18:54:05.018086 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" event={"ID":"50ce12a8-7d79-4fa2-a879-e3082ba41427","Type":"ContainerDied","Data":"5cbc89b308fc84d66f980bf1fb8675be5069ce9c0cf07f70762c9a3fe97801e7"}
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.496924 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592726 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592837 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592882 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592927 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592959 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593051 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593077 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593124 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593158 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593187 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593230 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593312 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593433 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593478 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") "
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.601008 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.601435 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.601485 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4" (OuterVolumeSpecName: "kube-api-access-brbd4") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "kube-api-access-brbd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.603300 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.603850 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.604068 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.604915 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.606879 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.607359 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.607875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.608187 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.608988 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.609462 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.610474 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.636671 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.641470 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory" (OuterVolumeSpecName: "inventory") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696687 4985 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696746 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696765 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696781 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696795 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696810 4985 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696828 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696843 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696858 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696873 4985 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696885 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696916 4985 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696931 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696947 4985 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696961 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696977 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.040085 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" event={"ID":"50ce12a8-7d79-4fa2-a879-e3082ba41427","Type":"ContainerDied","Data":"86b2b13c1f9b434c2e5a83de4df662da5429c9dffd92a5ab4c0c55d94d2c48a1"}
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.040136 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86b2b13c1f9b434c2e5a83de4df662da5429c9dffd92a5ab4c0c55d94d2c48a1"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.040161 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.192562 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"]
Jan 28 18:54:07 crc kubenswrapper[4985]: E0128 18:54:07.193289 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ce12a8-7d79-4fa2-a879-e3082ba41427" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.193307 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ce12a8-7d79-4fa2-a879-e3082ba41427" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.193564 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ce12a8-7d79-4fa2-a879-e3082ba41427" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.194437 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.200839 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.201012 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.201132 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.201240 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.201373 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.209533 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"]
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.312870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.312950 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.312980 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.313013 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.313049 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415413 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415505 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415552 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415600 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415653 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.416691 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.420745 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.428979 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.434062 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.448569 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.528953 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:54:08 crc kubenswrapper[4985]: I0128 18:54:08.098954 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"]
Jan 28 18:54:09 crc kubenswrapper[4985]: I0128 18:54:09.065102 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" event={"ID":"7b281922-4bb4-45f8-b633-d82925f4814e","Type":"ContainerStarted","Data":"a6bdec8510499a26c27cbda2b2c45b9cd3c5e0612fdc037ef6a4027ab34f7027"}
Jan 28 18:54:09 crc kubenswrapper[4985]: I0128 18:54:09.068269 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" event={"ID":"7b281922-4bb4-45f8-b633-d82925f4814e","Type":"ContainerStarted","Data":"9be887f338d3681c1a810a44831d9c5beb00ea3f830c83597e9b4895f61de618"}
Jan 28 18:54:10 crc kubenswrapper[4985]: I0128 18:54:10.104082 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" podStartSLOduration=2.4608100840000002 podStartE2EDuration="3.10405819s" podCreationTimestamp="2026-01-28 18:54:07 +0000 UTC" firstStartedPulling="2026-01-28 18:54:08.105457113 +0000 UTC m=+2458.932019934" lastFinishedPulling="2026-01-28 18:54:08.748705219 +0000 UTC m=+2459.575268040" observedRunningTime="2026-01-28 18:54:10.098190174 +0000 UTC m=+2460.924753005" watchObservedRunningTime="2026-01-28 18:54:10.10405819 +0000 UTC m=+2460.930621011"
Jan 28 18:54:12 crc kubenswrapper[4985]: I0128 18:54:12.265497 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:54:12 crc kubenswrapper[4985]: E0128 18:54:12.266526 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:54:23 crc kubenswrapper[4985]: I0128 18:54:23.264984 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:54:23 crc kubenswrapper[4985]: E0128 18:54:23.265895 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:54:38 crc kubenswrapper[4985]: I0128 18:54:38.265114 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:54:38 crc kubenswrapper[4985]: E0128 18:54:38.265991 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:54:41 crc kubenswrapper[4985]: I0128 18:54:41.407560 4985 scope.go:117] "RemoveContainer" containerID="7dec6fdf3bc8770aef28236161fb96819a55a36d37cd04df32abd054cd4e7c4d"
Jan 28 18:54:52 crc kubenswrapper[4985]: I0128 18:54:52.264983 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:54:52 crc kubenswrapper[4985]: E0128 18:54:52.266288 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:55:03 crc kubenswrapper[4985]: I0128 18:55:03.264547 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:55:03 crc kubenswrapper[4985]: E0128 18:55:03.265879 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:55:06 crc kubenswrapper[4985]: I0128 18:55:06.768174 4985 generic.go:334] "Generic (PLEG): container finished" podID="7b281922-4bb4-45f8-b633-d82925f4814e" containerID="a6bdec8510499a26c27cbda2b2c45b9cd3c5e0612fdc037ef6a4027ab34f7027" exitCode=0
Jan 28 18:55:06 crc kubenswrapper[4985]: I0128 18:55:06.768269 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" event={"ID":"7b281922-4bb4-45f8-b633-d82925f4814e","Type":"ContainerDied","Data":"a6bdec8510499a26c27cbda2b2c45b9cd3c5e0612fdc037ef6a4027ab34f7027"}
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.336798 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.432934 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") "
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.433099 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") "
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.433235 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") "
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.433332 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") "
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.433446 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") "
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.481240 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2" (OuterVolumeSpecName: "kube-api-access-gw4l2") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "kube-api-access-gw4l2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.494571 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.545674 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.545717 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") on node \"crc\" DevicePath \"\""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.589535 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.592589 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory" (OuterVolumeSpecName: "inventory") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.614481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.648524 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.648567 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.648580 4985 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.789597 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" event={"ID":"7b281922-4bb4-45f8-b633-d82925f4814e","Type":"ContainerDied","Data":"9be887f338d3681c1a810a44831d9c5beb00ea3f830c83597e9b4895f61de618"}
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.789958 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9be887f338d3681c1a810a44831d9c5beb00ea3f830c83597e9b4895f61de618"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.789669 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.905099 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"]
Jan 28 18:55:08 crc kubenswrapper[4985]: E0128 18:55:08.905725 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b281922-4bb4-45f8-b633-d82925f4814e" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.905748 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b281922-4bb4-45f8-b633-d82925f4814e" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.905997 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b281922-4bb4-45f8-b633-d82925f4814e" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.906996 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.910194 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911035 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911395 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911472 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911615 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911624 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.922938 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"]
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.955966 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956058 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956130 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956206 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956283 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.060420 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.060655 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.061410 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.061491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.061537 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.061591 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.068046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.068879 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.069085 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.070527 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.071929 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"
Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.084660 4985
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.259271 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.920033 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"] Jan 28 18:55:10 crc kubenswrapper[4985]: I0128 18:55:10.827084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" event={"ID":"85887caf-94f1-4f74-820c-edba2628a8e6","Type":"ContainerStarted","Data":"ac963befe690cdc1d35b858dad9c3859445a9726968785eea97d6ee2715cdae8"} Jan 28 18:55:10 crc kubenswrapper[4985]: I0128 18:55:10.827556 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" event={"ID":"85887caf-94f1-4f74-820c-edba2628a8e6","Type":"ContainerStarted","Data":"6eb7939a3a6d53cf73783e1b7daf079cac16b5c1e0797439ef1444a93fe33322"} Jan 28 18:55:10 crc kubenswrapper[4985]: I0128 18:55:10.844435 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" podStartSLOduration=2.257632971 podStartE2EDuration="2.84441119s" podCreationTimestamp="2026-01-28 18:55:08 +0000 UTC" firstStartedPulling="2026-01-28 18:55:09.926120553 +0000 UTC m=+2520.752683374" lastFinishedPulling="2026-01-28 18:55:10.512898772 +0000 UTC m=+2521.339461593" observedRunningTime="2026-01-28 18:55:10.843040342 +0000 UTC m=+2521.669603163" watchObservedRunningTime="2026-01-28 18:55:10.84441119 +0000 UTC m=+2521.670974011" Jan 28 18:55:15 crc kubenswrapper[4985]: I0128 18:55:15.265374 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:55:15 crc kubenswrapper[4985]: E0128 18:55:15.266510 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:29 crc kubenswrapper[4985]: I0128 18:55:29.264441 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:55:29 crc kubenswrapper[4985]: E0128 18:55:29.266080 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:44 crc kubenswrapper[4985]: I0128 18:55:44.264759 4985 
scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:55:44 crc kubenswrapper[4985]: E0128 18:55:44.265513 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:56 crc kubenswrapper[4985]: I0128 18:55:56.375792 4985 generic.go:334] "Generic (PLEG): container finished" podID="85887caf-94f1-4f74-820c-edba2628a8e6" containerID="ac963befe690cdc1d35b858dad9c3859445a9726968785eea97d6ee2715cdae8" exitCode=0 Jan 28 18:55:56 crc kubenswrapper[4985]: I0128 18:55:56.375898 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" event={"ID":"85887caf-94f1-4f74-820c-edba2628a8e6","Type":"ContainerDied","Data":"ac963befe690cdc1d35b858dad9c3859445a9726968785eea97d6ee2715cdae8"} Jan 28 18:55:57 crc kubenswrapper[4985]: I0128 18:55:57.264564 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:55:57 crc kubenswrapper[4985]: E0128 18:55:57.265148 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:57 crc kubenswrapper[4985]: I0128 18:55:57.901891 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.064158 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.064459 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.064720 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.064881 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.065063 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.065213 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.070662 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.071199 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j" (OuterVolumeSpecName: "kube-api-access-rgs7j") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "kube-api-access-rgs7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.098749 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.104037 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.104831 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory" (OuterVolumeSpecName: "inventory") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.122760 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168685 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168735 4985 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168754 4985 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168767 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168779 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168789 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.406229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" event={"ID":"85887caf-94f1-4f74-820c-edba2628a8e6","Type":"ContainerDied","Data":"6eb7939a3a6d53cf73783e1b7daf079cac16b5c1e0797439ef1444a93fe33322"} Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.406281 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eb7939a3a6d53cf73783e1b7daf079cac16b5c1e0797439ef1444a93fe33322" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.406343 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.644934 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9"] Jan 28 18:55:58 crc kubenswrapper[4985]: E0128 18:55:58.645775 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85887caf-94f1-4f74-820c-edba2628a8e6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.645795 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="85887caf-94f1-4f74-820c-edba2628a8e6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.646073 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="85887caf-94f1-4f74-820c-edba2628a8e6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.646879 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.650994 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.651223 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.651313 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.651343 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.651412 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.670487 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9"] Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.703284 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.703332 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.703391 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: 
\"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.703660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.704000 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.805388 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.805507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.806306 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.806377 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.806401 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.810742 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: 
\"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.811509 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.812728 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.814024 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.831660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.972979 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:59 crc kubenswrapper[4985]: I0128 18:55:59.566306 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9"] Jan 28 18:56:00 crc kubenswrapper[4985]: I0128 18:56:00.430578 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" event={"ID":"05f3f537-0392-45c7-af0d-36294670ed29","Type":"ContainerStarted","Data":"76da32fa2dc8d40e8fb07f71ee0b743aebd23afd91508409999c0fb1c42f6834"} Jan 28 18:56:02 crc kubenswrapper[4985]: I0128 18:56:02.464964 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" event={"ID":"05f3f537-0392-45c7-af0d-36294670ed29","Type":"ContainerStarted","Data":"0cd11f134e26fd5286a737ab22f5900bfc3ffc7a04b1b4a5333939680ca416d2"} Jan 28 18:56:02 crc kubenswrapper[4985]: I0128 18:56:02.488054 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" podStartSLOduration=2.656869108 podStartE2EDuration="4.488027348s" podCreationTimestamp="2026-01-28 18:55:58 +0000 UTC" firstStartedPulling="2026-01-28 18:55:59.585865382 +0000 UTC m=+2570.412428203" lastFinishedPulling="2026-01-28 18:56:01.417023622 +0000 UTC m=+2572.243586443" observedRunningTime="2026-01-28 18:56:02.482636086 +0000 UTC m=+2573.309198907" watchObservedRunningTime="2026-01-28 18:56:02.488027348 +0000 UTC m=+2573.314590169" Jan 28 18:56:11 crc kubenswrapper[4985]: I0128 18:56:11.273932 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:56:11 crc kubenswrapper[4985]: I0128 18:56:11.565024 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4"} Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.719724 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.722845 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.736406 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.777241 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.777509 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.777853 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.880776 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.880912 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.880961 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.881635 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.881661 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.900784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:38 crc kubenswrapper[4985]: I0128 18:57:38.050715 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:38 crc kubenswrapper[4985]: I0128 18:57:38.581098 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:57:39 crc kubenswrapper[4985]: I0128 18:57:39.556463 4985 generic.go:334] "Generic (PLEG): container finished" podID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerID="32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634" exitCode=0 Jan 28 18:57:39 crc kubenswrapper[4985]: I0128 18:57:39.557751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerDied","Data":"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634"} Jan 28 18:57:39 crc kubenswrapper[4985]: I0128 18:57:39.557791 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerStarted","Data":"86e211ca3609ca2214d96788321bae078f1513b7cc9bb22c267e07e77fc71907"} Jan 28 18:57:41 crc kubenswrapper[4985]: I0128 18:57:41.598535 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerStarted","Data":"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5"} Jan 28 18:57:45 crc kubenswrapper[4985]: I0128 18:57:45.645300 4985 generic.go:334] "Generic (PLEG): container finished" podID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerID="82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5" exitCode=0 Jan 28 18:57:45 crc kubenswrapper[4985]: I0128 18:57:45.645346 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerDied","Data":"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5"} Jan 28 18:57:47 crc kubenswrapper[4985]: I0128 18:57:47.670735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerStarted","Data":"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e"} Jan 28 18:57:47 crc kubenswrapper[4985]: I0128 18:57:47.702829 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m2zw4" podStartSLOduration=2.887646418 podStartE2EDuration="10.702803585s" podCreationTimestamp="2026-01-28 18:57:37 +0000 UTC" firstStartedPulling="2026-01-28 18:57:39.562872011 +0000 UTC m=+2670.389434842" lastFinishedPulling="2026-01-28 18:57:47.378029188 +0000 UTC m=+2678.204592009" observedRunningTime="2026-01-28 18:57:47.692805332 +0000 UTC m=+2678.519368153" watchObservedRunningTime="2026-01-28 18:57:47.702803585 +0000 UTC m=+2678.529366416" Jan 28 18:57:48 crc kubenswrapper[4985]: I0128 18:57:48.051322 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m2zw4" 
Jan 28 18:57:48 crc kubenswrapper[4985]: I0128 18:57:48.051643 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:49 crc kubenswrapper[4985]: I0128 18:57:49.110035 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m2zw4" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" probeResult="failure" output=< Jan 28 18:57:49 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:57:49 crc kubenswrapper[4985]: > Jan 28 18:57:58 crc kubenswrapper[4985]: I0128 18:57:58.099932 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:58 crc kubenswrapper[4985]: I0128 18:57:58.154111 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:58 crc kubenswrapper[4985]: I0128 18:57:58.363419 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:57:59 crc kubenswrapper[4985]: I0128 18:57:59.822392 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m2zw4" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" containerID="cri-o://5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" gracePeriod=2 Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.447399 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.540896 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") pod \"0ef513f4-9311-4ca7-ba53-391e37295f4d\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.541722 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") pod \"0ef513f4-9311-4ca7-ba53-391e37295f4d\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.541784 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") pod \"0ef513f4-9311-4ca7-ba53-391e37295f4d\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.543907 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities" (OuterVolumeSpecName: "utilities") pod "0ef513f4-9311-4ca7-ba53-391e37295f4d" (UID: "0ef513f4-9311-4ca7-ba53-391e37295f4d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.572540 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64" (OuterVolumeSpecName: "kube-api-access-jkq64") pod "0ef513f4-9311-4ca7-ba53-391e37295f4d" (UID: "0ef513f4-9311-4ca7-ba53-391e37295f4d"). InnerVolumeSpecName "kube-api-access-jkq64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.645464 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.645523 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.702188 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ef513f4-9311-4ca7-ba53-391e37295f4d" (UID: "0ef513f4-9311-4ca7-ba53-391e37295f4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.747239 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836537 4985 generic.go:334] "Generic (PLEG): container finished" podID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerID="5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" exitCode=0 Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836609 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerDied","Data":"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e"} Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836645 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerDied","Data":"86e211ca3609ca2214d96788321bae078f1513b7cc9bb22c267e07e77fc71907"} Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836667 4985 scope.go:117] "RemoveContainer" containerID="5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836691 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.863305 4985 scope.go:117] "RemoveContainer" containerID="82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.893031 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.907331 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.910214 4985 scope.go:117] "RemoveContainer" containerID="32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.969161 4985 scope.go:117] "RemoveContainer" containerID="5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" Jan 28 18:58:00 crc kubenswrapper[4985]: E0128 18:58:00.969574 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e\": container with ID starting with 5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e not found: ID does not exist" containerID="5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.969624 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e"} err="failed to get container status \"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e\": rpc error: code = NotFound desc = could not find container \"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e\": container with ID starting with 5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e not found: ID does not exist" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.969656 4985 scope.go:117] "RemoveContainer" containerID="82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5" Jan 28 18:58:00 crc kubenswrapper[4985]: E0128 18:58:00.970305 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5\": container with ID starting with 82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5 not found: ID does not exist" containerID="82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.970340 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5"} err="failed to get container status \"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5\": rpc error: code = NotFound desc = could not find container \"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5\": container with ID starting with 82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5 not found: ID does not exist" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.970364 4985 scope.go:117] "RemoveContainer" containerID="32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634" Jan 28 18:58:00 crc kubenswrapper[4985]: E0128 18:58:00.970680 4985 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634\": container with ID starting with 32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634 not found: ID does not exist" containerID="32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.970734 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634"} err="failed to get container status \"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634\": rpc error: code = NotFound desc = could not find container \"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634\": container with ID starting with 32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634 not found: ID does not exist" Jan 28 18:58:01 crc kubenswrapper[4985]: I0128 18:58:01.276190 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" path="/var/lib/kubelet/pods/0ef513f4-9311-4ca7-ba53-391e37295f4d/volumes" Jan 28 18:58:11 crc kubenswrapper[4985]: I0128 18:58:11.185763 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:58:11 crc kubenswrapper[4985]: I0128 18:58:11.186317 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.912827 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:15 crc kubenswrapper[4985]: E0128 18:58:15.914132 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="extract-content" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.914149 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="extract-content" Jan 28 18:58:15 crc kubenswrapper[4985]: E0128 18:58:15.914188 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.914194 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" Jan 28 18:58:15 crc kubenswrapper[4985]: E0128 18:58:15.914207 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="extract-utilities" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.914213 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="extract-utilities" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.914432 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 
18:58:15.916245 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.929726 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.054677 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.054986 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.055503 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159193 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159325 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159597 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159734 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 
18:58:16.190087 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.237056 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.722155 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:17 crc kubenswrapper[4985]: I0128 18:58:17.026678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerStarted","Data":"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765"} Jan 28 18:58:17 crc kubenswrapper[4985]: I0128 18:58:17.027016 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerStarted","Data":"d8ba6f044075ced785fa9cc45c5e2817c626522b7cd0479bc64d80543a554620"} Jan 28 18:58:18 crc kubenswrapper[4985]: I0128 18:58:18.044591 4985 generic.go:334] "Generic (PLEG): container finished" podID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerID="6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765" exitCode=0 Jan 28 18:58:18 crc kubenswrapper[4985]: I0128 18:58:18.044650 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerDied","Data":"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765"} Jan 28 18:58:18 crc kubenswrapper[4985]: I0128 18:58:18.048011 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:58:19 crc kubenswrapper[4985]: I0128 18:58:19.058428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerStarted","Data":"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0"} Jan 28 18:58:20 crc kubenswrapper[4985]: I0128 18:58:20.074646 4985 generic.go:334] "Generic (PLEG): container finished" podID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerID="3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0" exitCode=0 Jan 28 18:58:20 crc kubenswrapper[4985]: I0128 18:58:20.074728 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerDied","Data":"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0"} Jan 28 18:58:21 crc kubenswrapper[4985]: I0128 18:58:21.095752 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerStarted","Data":"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f"} Jan 28 18:58:21 crc kubenswrapper[4985]: I0128 18:58:21.124023 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zq8sk" podStartSLOduration=3.611980361 
podStartE2EDuration="6.123980732s" podCreationTimestamp="2026-01-28 18:58:15 +0000 UTC" firstStartedPulling="2026-01-28 18:58:18.047604096 +0000 UTC m=+2708.874166917" lastFinishedPulling="2026-01-28 18:58:20.559604467 +0000 UTC m=+2711.386167288" observedRunningTime="2026-01-28 18:58:21.117359725 +0000 UTC m=+2711.943922556" watchObservedRunningTime="2026-01-28 18:58:21.123980732 +0000 UTC m=+2711.950543563" Jan 28 18:58:26 crc kubenswrapper[4985]: I0128 18:58:26.237546 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:26 crc kubenswrapper[4985]: I0128 18:58:26.238088 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:26 crc kubenswrapper[4985]: I0128 18:58:26.289916 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:27 crc kubenswrapper[4985]: I0128 18:58:27.206427 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:27 crc kubenswrapper[4985]: I0128 18:58:27.259732 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.177460 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zq8sk" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="registry-server" containerID="cri-o://2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" gracePeriod=2 Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.696875 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.809987 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") pod \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.810408 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") pod \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.810461 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") pod \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.811096 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities" (OuterVolumeSpecName: "utilities") pod "50eaf46c-c5a3-45ec-98bb-0a22105daf95" (UID: "50eaf46c-c5a3-45ec-98bb-0a22105daf95"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.811493 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.825047 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld" (OuterVolumeSpecName: "kube-api-access-lh9ld") pod "50eaf46c-c5a3-45ec-98bb-0a22105daf95" (UID: "50eaf46c-c5a3-45ec-98bb-0a22105daf95"). InnerVolumeSpecName "kube-api-access-lh9ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.913893 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188833 4985 generic.go:334] "Generic (PLEG): container finished" podID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerID="2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" exitCode=0 Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188885 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerDied","Data":"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f"} Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188926 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerDied","Data":"d8ba6f044075ced785fa9cc45c5e2817c626522b7cd0479bc64d80543a554620"} Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188965 4985 scope.go:117] "RemoveContainer" containerID="2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188964 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.211661 4985 scope.go:117] "RemoveContainer" containerID="3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.235665 4985 scope.go:117] "RemoveContainer" containerID="6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.296671 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50eaf46c-c5a3-45ec-98bb-0a22105daf95" (UID: "50eaf46c-c5a3-45ec-98bb-0a22105daf95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.307638 4985 scope.go:117] "RemoveContainer" containerID="2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" Jan 28 18:58:30 crc kubenswrapper[4985]: E0128 18:58:30.308298 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f\": container with ID starting with 2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f not found: ID does not exist" containerID="2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.308347 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f"} err="failed to get container status \"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f\": rpc error: code = NotFound desc = could not find container \"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f\": container with ID starting with 2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f not found: ID does not exist" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.308375 4985 scope.go:117] "RemoveContainer" containerID="3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0" Jan 28 18:58:30 crc kubenswrapper[4985]: E0128 18:58:30.308953 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0\": container with ID starting with 3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0 not found: ID does not exist" containerID="3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.309180 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0"} err="failed to get container status \"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0\": rpc error: code = NotFound desc = could not find container \"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0\": container with ID starting with 3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0 not found: ID does not exist" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.309200 4985 scope.go:117] "RemoveContainer" containerID="6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765" Jan 28 18:58:30 crc kubenswrapper[4985]: E0128 18:58:30.311357 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765\": container with ID starting with 6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765 not found: ID does not exist" containerID="6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.311404 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765"} err="failed to get container status \"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765\": rpc error: code = NotFound desc = could not 
find container \"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765\": container with ID starting with 6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765 not found: ID does not exist" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.328790 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.526183 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.536288 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:31 crc kubenswrapper[4985]: I0128 18:58:31.280352 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" path="/var/lib/kubelet/pods/50eaf46c-c5a3-45ec-98bb-0a22105daf95/volumes" Jan 28 18:58:41 crc kubenswrapper[4985]: I0128 18:58:41.186733 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:58:41 crc kubenswrapper[4985]: I0128 18:58:41.187427 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.185611 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.186187 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.186241 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.187211 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.187294 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" 
containerID="cri-o://5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4" gracePeriod=600 Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.680018 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4" exitCode=0 Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.680646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4"} Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.680691 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb"} Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.680711 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:59:43 crc kubenswrapper[4985]: I0128 18:59:43.039223 4985 generic.go:334] "Generic (PLEG): container finished" podID="05f3f537-0392-45c7-af0d-36294670ed29" containerID="0cd11f134e26fd5286a737ab22f5900bfc3ffc7a04b1b4a5333939680ca416d2" exitCode=0 Jan 28 18:59:43 crc kubenswrapper[4985]: I0128 18:59:43.039286 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" event={"ID":"05f3f537-0392-45c7-af0d-36294670ed29","Type":"ContainerDied","Data":"0cd11f134e26fd5286a737ab22f5900bfc3ffc7a04b1b4a5333939680ca416d2"} Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.702890 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797005 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797129 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797278 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797485 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797547 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.803452 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.805678 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc" (OuterVolumeSpecName: "kube-api-access-tvwzc") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "kube-api-access-tvwzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.834488 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.851020 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.860567 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory" (OuterVolumeSpecName: "inventory") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.900958 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.901264 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.901354 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.901428 4985 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.901577 4985 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.063673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" event={"ID":"05f3f537-0392-45c7-af0d-36294670ed29","Type":"ContainerDied","Data":"76da32fa2dc8d40e8fb07f71ee0b743aebd23afd91508409999c0fb1c42f6834"} Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.063896 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76da32fa2dc8d40e8fb07f71ee0b743aebd23afd91508409999c0fb1c42f6834" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.063719 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.158150 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4"] Jan 28 18:59:45 crc kubenswrapper[4985]: E0128 18:59:45.158900 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="extract-utilities" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.158947 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="extract-utilities" Jan 28 18:59:45 crc kubenswrapper[4985]: E0128 18:59:45.158982 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="extract-content" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.158995 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="extract-content" Jan 28 18:59:45 crc kubenswrapper[4985]: E0128 18:59:45.159034 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="registry-server" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.159046 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="registry-server" Jan 28 18:59:45 crc kubenswrapper[4985]: E0128 18:59:45.159111 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f3f537-0392-45c7-af0d-36294670ed29" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.159127 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f3f537-0392-45c7-af0d-36294670ed29" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.159621 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="registry-server" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.159695 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f3f537-0392-45c7-af0d-36294670ed29" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.161165 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.167900 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.167901 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.167911 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.168555 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.168578 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.168590 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.168618 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.179016 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4"] Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.310739 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.310886 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.310930 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.310954 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311133 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311187 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311380 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311573 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311650 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414167 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414209 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414327 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414487 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414632 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.416542 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.420894 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.421087 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.421279 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.422438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.423891 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.424139 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.428551 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.433889 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.486935 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:46 crc kubenswrapper[4985]: I0128 18:59:46.101780 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4"] Jan 28 18:59:46 crc kubenswrapper[4985]: W0128 18:59:46.105811 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb129af39_361b_4dba_bdbb_31531c3a2ce9.slice/crio-3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c WatchSource:0}: Error finding container 3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c: Status 404 returned error can't find the container with id 3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c Jan 28 18:59:47 crc kubenswrapper[4985]: I0128 18:59:47.084134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" event={"ID":"b129af39-361b-4dba-bdbb-31531c3a2ce9","Type":"ContainerStarted","Data":"3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c"} Jan 28 18:59:48 crc kubenswrapper[4985]: I0128 18:59:48.101881 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" event={"ID":"b129af39-361b-4dba-bdbb-31531c3a2ce9","Type":"ContainerStarted","Data":"0b6a7ce57d1549ccd7fcb1e692f7f4ffc2788f4699e60c9a7fdd7e7e4ae4777e"} Jan 28 18:59:48 crc kubenswrapper[4985]: I0128 18:59:48.129648 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" podStartSLOduration=2.611628895 podStartE2EDuration="3.129624165s" podCreationTimestamp="2026-01-28 18:59:45 +0000 UTC" firstStartedPulling="2026-01-28 18:59:46.108495427 +0000 UTC m=+2796.935058248" lastFinishedPulling="2026-01-28 18:59:46.626490687 +0000 UTC m=+2797.453053518" observedRunningTime="2026-01-28 18:59:48.122772921 +0000 UTC m=+2798.949335762" watchObservedRunningTime="2026-01-28 18:59:48.129624165 +0000 UTC m=+2798.956186996" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.147578 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"] Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.149825 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.152052 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.153536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.162063 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"] Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.177987 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.178216 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.178322 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.280787 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.280949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.281191 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.281663 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") pod 
\"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.303401 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.312518 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.484099 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:01 crc kubenswrapper[4985]: I0128 19:00:01.050787 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"] Jan 28 19:00:01 crc kubenswrapper[4985]: I0128 19:00:01.240140 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" event={"ID":"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1","Type":"ContainerStarted","Data":"fc36e8e83ce2dcdbad3b7ac3097968106477e97a9a58431ad0304a2bcaebdce7"} Jan 28 19:00:01 crc kubenswrapper[4985]: I0128 19:00:01.240508 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" event={"ID":"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1","Type":"ContainerStarted","Data":"8a2fca129e9b5d437fa5b8e4e2a0cbbbc5bd4bd1ae2fbcd231460f8b55032a52"} Jan 28 19:00:01 crc kubenswrapper[4985]: I0128 19:00:01.268930 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" podStartSLOduration=1.268910118 podStartE2EDuration="1.268910118s" podCreationTimestamp="2026-01-28 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:00:01.258036691 +0000 UTC m=+2812.084599522" watchObservedRunningTime="2026-01-28 19:00:01.268910118 +0000 UTC m=+2812.095472939" Jan 28 19:00:02 crc kubenswrapper[4985]: I0128 19:00:02.255621 4985 generic.go:334] "Generic (PLEG): container finished" podID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" containerID="fc36e8e83ce2dcdbad3b7ac3097968106477e97a9a58431ad0304a2bcaebdce7" exitCode=0 Jan 28 19:00:02 crc kubenswrapper[4985]: I0128 19:00:02.255693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" event={"ID":"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1","Type":"ContainerDied","Data":"fc36e8e83ce2dcdbad3b7ac3097968106477e97a9a58431ad0304a2bcaebdce7"} Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.726577 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.764629 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") pod \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.764679 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") pod \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.764996 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") pod \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.769644 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume" (OuterVolumeSpecName: "config-volume") pod "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" (UID: "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.772729 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" (UID: "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.787046 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p" (OuterVolumeSpecName: "kube-api-access-tql8p") pod "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" (UID: "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1"). InnerVolumeSpecName "kube-api-access-tql8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.868516 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.868556 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.868583 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.275571 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" event={"ID":"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1","Type":"ContainerDied","Data":"8a2fca129e9b5d437fa5b8e4e2a0cbbbc5bd4bd1ae2fbcd231460f8b55032a52"} Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.275872 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a2fca129e9b5d437fa5b8e4e2a0cbbbc5bd4bd1ae2fbcd231460f8b55032a52" Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.275614 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.339812 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.351544 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 19:00:05 crc kubenswrapper[4985]: I0128 19:00:05.287115 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" path="/var/lib/kubelet/pods/1030ed14-9fc1-4ec9-a93c-13eab69320ae/volumes" Jan 28 19:00:41 crc kubenswrapper[4985]: I0128 19:00:41.695359 4985 scope.go:117] "RemoveContainer" containerID="437ea022ca695dd3c8be1cbb1b44f690df361a980e7c2eb2985b0f8b38dc9e0c" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.164580 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29493781-6kphz"] Jan 28 19:01:00 crc kubenswrapper[4985]: E0128 19:01:00.165812 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" containerName="collect-profiles" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.165834 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" containerName="collect-profiles" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.166192 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" containerName="collect-profiles" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.167346 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.180385 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493781-6kphz"] Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.288475 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.288988 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.289042 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.289121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.392180 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.392228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.392313 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.392421 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.400182 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.404220 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.405088 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.412736 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.494323 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:01 crc kubenswrapper[4985]: I0128 19:01:01.012550 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493781-6kphz"] Jan 28 19:01:01 crc kubenswrapper[4985]: W0128 19:01:01.022593 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7635ee1a_7676_44ad_af7f_ebfab7b56933.slice/crio-afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db WatchSource:0}: Error finding container afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db: Status 404 returned error can't find the container with id afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db Jan 28 19:01:01 crc kubenswrapper[4985]: I0128 19:01:01.912355 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-6kphz" event={"ID":"7635ee1a-7676-44ad-af7f-ebfab7b56933","Type":"ContainerStarted","Data":"f86670ac3325122c583d2e8a88920c9a20e9a32076d431e392e60b06070ddc47"} Jan 28 19:01:01 crc kubenswrapper[4985]: I0128 19:01:01.912694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-6kphz" event={"ID":"7635ee1a-7676-44ad-af7f-ebfab7b56933","Type":"ContainerStarted","Data":"afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db"} Jan 28 19:01:01 crc kubenswrapper[4985]: I0128 19:01:01.960754 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29493781-6kphz" podStartSLOduration=1.96073026 podStartE2EDuration="1.96073026s" podCreationTimestamp="2026-01-28 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:01:01.935177088 +0000 UTC m=+2872.761739909" watchObservedRunningTime="2026-01-28 19:01:01.96073026 +0000 UTC m=+2872.787293091" Jan 28 19:01:04 crc kubenswrapper[4985]: I0128 19:01:04.953200 4985 
generic.go:334] "Generic (PLEG): container finished" podID="7635ee1a-7676-44ad-af7f-ebfab7b56933" containerID="f86670ac3325122c583d2e8a88920c9a20e9a32076d431e392e60b06070ddc47" exitCode=0 Jan 28 19:01:04 crc kubenswrapper[4985]: I0128 19:01:04.953276 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-6kphz" event={"ID":"7635ee1a-7676-44ad-af7f-ebfab7b56933","Type":"ContainerDied","Data":"f86670ac3325122c583d2e8a88920c9a20e9a32076d431e392e60b06070ddc47"} Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.383673 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.495630 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") pod \"7635ee1a-7676-44ad-af7f-ebfab7b56933\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.495688 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") pod \"7635ee1a-7676-44ad-af7f-ebfab7b56933\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.495824 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") pod \"7635ee1a-7676-44ad-af7f-ebfab7b56933\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.495859 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") pod \"7635ee1a-7676-44ad-af7f-ebfab7b56933\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.501628 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz" (OuterVolumeSpecName: "kube-api-access-rcmrz") pod "7635ee1a-7676-44ad-af7f-ebfab7b56933" (UID: "7635ee1a-7676-44ad-af7f-ebfab7b56933"). InnerVolumeSpecName "kube-api-access-rcmrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.502147 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7635ee1a-7676-44ad-af7f-ebfab7b56933" (UID: "7635ee1a-7676-44ad-af7f-ebfab7b56933"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.542382 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7635ee1a-7676-44ad-af7f-ebfab7b56933" (UID: "7635ee1a-7676-44ad-af7f-ebfab7b56933"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.559510 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data" (OuterVolumeSpecName: "config-data") pod "7635ee1a-7676-44ad-af7f-ebfab7b56933" (UID: "7635ee1a-7676-44ad-af7f-ebfab7b56933"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.599325 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.599359 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.599369 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.599380 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.974163 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-6kphz" event={"ID":"7635ee1a-7676-44ad-af7f-ebfab7b56933","Type":"ContainerDied","Data":"afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db"} Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.974504 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.974232 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:11 crc kubenswrapper[4985]: I0128 19:01:11.186789 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:01:11 crc kubenswrapper[4985]: I0128 19:01:11.187536 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.965907 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:15 crc kubenswrapper[4985]: E0128 19:01:15.967559 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7635ee1a-7676-44ad-af7f-ebfab7b56933" containerName="keystone-cron" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.967579 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7635ee1a-7676-44ad-af7f-ebfab7b56933" containerName="keystone-cron" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.967966 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7635ee1a-7676-44ad-af7f-ebfab7b56933" containerName="keystone-cron" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.972155 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.982705 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.064160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.064523 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.064654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.167791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " 
pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.167898 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.167934 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.168445 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.168510 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.188009 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.303379 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.878012 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:17 crc kubenswrapper[4985]: I0128 19:01:17.080672 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerStarted","Data":"faaa09621c13f7869b093d973f48110c892e2f5b743c15f112d4392d8754104e"} Jan 28 19:01:18 crc kubenswrapper[4985]: I0128 19:01:18.093777 4985 generic.go:334] "Generic (PLEG): container finished" podID="d8975c23-346a-478b-b671-42564f301319" containerID="3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e" exitCode=0 Jan 28 19:01:18 crc kubenswrapper[4985]: I0128 19:01:18.095226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerDied","Data":"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e"} Jan 28 19:01:21 crc kubenswrapper[4985]: I0128 19:01:21.151983 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerStarted","Data":"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640"} Jan 28 19:01:24 crc kubenswrapper[4985]: I0128 19:01:24.192090 4985 generic.go:334] "Generic (PLEG): container finished" podID="d8975c23-346a-478b-b671-42564f301319" containerID="84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640" exitCode=0 Jan 28 19:01:24 crc kubenswrapper[4985]: I0128 19:01:24.192178 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerDied","Data":"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640"} Jan 28 19:01:25 crc kubenswrapper[4985]: I0128 19:01:25.205264 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerStarted","Data":"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67"} Jan 28 19:01:25 crc kubenswrapper[4985]: I0128 19:01:25.227266 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-trpsd" podStartSLOduration=3.533151823 podStartE2EDuration="10.227226104s" podCreationTimestamp="2026-01-28 19:01:15 +0000 UTC" firstStartedPulling="2026-01-28 19:01:18.097856722 +0000 UTC m=+2888.924419543" lastFinishedPulling="2026-01-28 19:01:24.791931003 +0000 UTC m=+2895.618493824" observedRunningTime="2026-01-28 19:01:25.223230511 +0000 UTC m=+2896.049793332" watchObservedRunningTime="2026-01-28 19:01:25.227226104 +0000 UTC m=+2896.053788925" Jan 28 19:01:26 crc kubenswrapper[4985]: I0128 19:01:26.303523 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:26 crc kubenswrapper[4985]: I0128 19:01:26.303839 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:27 crc kubenswrapper[4985]: I0128 19:01:27.366536 4985 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-trpsd" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" probeResult="failure" output=< Jan 28 19:01:27 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:01:27 crc kubenswrapper[4985]: > Jan 28 19:01:36 crc kubenswrapper[4985]: I0128 19:01:36.362013 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:36 crc kubenswrapper[4985]: I0128 19:01:36.414862 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:36 crc kubenswrapper[4985]: I0128 19:01:36.604762 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.335983 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-trpsd" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" containerID="cri-o://c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" gracePeriod=2 Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.857722 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.942722 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") pod \"d8975c23-346a-478b-b671-42564f301319\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.942790 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") pod \"d8975c23-346a-478b-b671-42564f301319\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.943011 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") pod \"d8975c23-346a-478b-b671-42564f301319\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.943809 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities" (OuterVolumeSpecName: "utilities") pod "d8975c23-346a-478b-b671-42564f301319" (UID: "d8975c23-346a-478b-b671-42564f301319"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.948337 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h" (OuterVolumeSpecName: "kube-api-access-gr46h") pod "d8975c23-346a-478b-b671-42564f301319" (UID: "d8975c23-346a-478b-b671-42564f301319"). InnerVolumeSpecName "kube-api-access-gr46h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.999166 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8975c23-346a-478b-b671-42564f301319" (UID: "d8975c23-346a-478b-b671-42564f301319"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.045496 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.045526 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.045535 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350381 4985 generic.go:334] "Generic (PLEG): container finished" podID="d8975c23-346a-478b-b671-42564f301319" containerID="c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" exitCode=0 Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350431 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerDied","Data":"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67"} Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerDied","Data":"faaa09621c13f7869b093d973f48110c892e2f5b743c15f112d4392d8754104e"} Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350471 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350490 4985 scope.go:117] "RemoveContainer" containerID="c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.382574 4985 scope.go:117] "RemoveContainer" containerID="84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.393133 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.406447 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.421821 4985 scope.go:117] "RemoveContainer" containerID="3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.490870 4985 scope.go:117] "RemoveContainer" containerID="c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" Jan 28 19:01:39 crc kubenswrapper[4985]: E0128 19:01:39.491763 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67\": container with ID starting with c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67 not found: ID does not exist" containerID="c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.491791 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67"} err="failed to get container status \"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67\": rpc error: code = NotFound desc = could not find container \"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67\": container with ID starting with c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67 not found: ID does not exist" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.491818 4985 scope.go:117] "RemoveContainer" containerID="84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640" Jan 28 19:01:39 crc kubenswrapper[4985]: E0128 19:01:39.492303 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640\": container with ID starting with 84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640 not found: ID does not exist" containerID="84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.492372 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640"} err="failed to get container status \"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640\": rpc error: code = NotFound desc = could not find container \"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640\": container with ID starting with 84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640 not found: ID does not exist" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.492424 4985 scope.go:117] "RemoveContainer" 
containerID="3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e" Jan 28 19:01:39 crc kubenswrapper[4985]: E0128 19:01:39.492876 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e\": container with ID starting with 3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e not found: ID does not exist" containerID="3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.492939 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e"} err="failed to get container status \"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e\": rpc error: code = NotFound desc = could not find container \"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e\": container with ID starting with 3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e not found: ID does not exist" Jan 28 19:01:41 crc kubenswrapper[4985]: I0128 19:01:41.187097 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:01:41 crc kubenswrapper[4985]: I0128 19:01:41.187387 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:01:41 crc kubenswrapper[4985]: I0128 19:01:41.275880 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8975c23-346a-478b-b671-42564f301319" path="/var/lib/kubelet/pods/d8975c23-346a-478b-b671-42564f301319/volumes" Jan 28 19:01:59 crc kubenswrapper[4985]: I0128 19:01:59.574628 4985 generic.go:334] "Generic (PLEG): container finished" podID="b129af39-361b-4dba-bdbb-31531c3a2ce9" containerID="0b6a7ce57d1549ccd7fcb1e692f7f4ffc2788f4699e60c9a7fdd7e7e4ae4777e" exitCode=0 Jan 28 19:01:59 crc kubenswrapper[4985]: I0128 19:01:59.574971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" event={"ID":"b129af39-361b-4dba-bdbb-31531c3a2ce9","Type":"ContainerDied","Data":"0b6a7ce57d1549ccd7fcb1e692f7f4ffc2788f4699e60c9a7fdd7e7e4ae4777e"} Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.099974 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109006 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109054 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109103 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109125 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109173 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109199 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109239 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109310 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109360 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.121156 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.139071 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc" (OuterVolumeSpecName: "kube-api-access-mt5sc") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "kube-api-access-mt5sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.181726 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.184511 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.186893 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.189193 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory" (OuterVolumeSpecName: "inventory") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.201432 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.206822 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.216641 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218129 4985 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218306 4985 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218384 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218466 4985 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218541 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218655 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218730 4985 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218799 4985 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218871 4985 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.596041 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" event={"ID":"b129af39-361b-4dba-bdbb-31531c3a2ce9","Type":"ContainerDied","Data":"3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c"} Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.596389 4985 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.596138 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708229 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq"] Jan 28 19:02:01 crc kubenswrapper[4985]: E0128 19:02:01.708811 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b129af39-361b-4dba-bdbb-31531c3a2ce9" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708839 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b129af39-361b-4dba-bdbb-31531c3a2ce9" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 19:02:01 crc kubenswrapper[4985]: E0128 19:02:01.708866 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="extract-content" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708874 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="extract-content" Jan 28 19:02:01 crc kubenswrapper[4985]: E0128 19:02:01.708889 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="extract-utilities" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708896 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="extract-utilities" Jan 28 19:02:01 crc kubenswrapper[4985]: E0128 19:02:01.708932 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708941 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.709235 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b129af39-361b-4dba-bdbb-31531c3a2ce9" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.709305 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.710333 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.713657 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.713930 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.714198 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.714528 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.714701 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.764499 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq"] Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766181 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766244 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766405 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766502 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766554 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz68d\" (UniqueName: \"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869223 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869323 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz68d\" (UniqueName: \"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869396 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869452 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869492 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869566 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869604 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.873628 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.873845 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.874164 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.875028 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.875126 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.875503 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.899982 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz68d\" (UniqueName: 
\"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:02 crc kubenswrapper[4985]: I0128 19:02:02.072238 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:02 crc kubenswrapper[4985]: I0128 19:02:02.652648 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq"] Jan 28 19:02:03 crc kubenswrapper[4985]: I0128 19:02:03.626631 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" event={"ID":"557f8a1e-1a37-47a3-aa41-7222181ea137","Type":"ContainerStarted","Data":"8b17231f5ddf8a4fcdf6edbbf7bfe5301dfb0efab463adfdf2cac11011e5b761"} Jan 28 19:02:03 crc kubenswrapper[4985]: I0128 19:02:03.627900 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" event={"ID":"557f8a1e-1a37-47a3-aa41-7222181ea137","Type":"ContainerStarted","Data":"5c9ce223c2123209d1a3f6e2f8a810235bf69b9d7616933eef101635da4de2e3"} Jan 28 19:02:03 crc kubenswrapper[4985]: I0128 19:02:03.653392 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" podStartSLOduration=2.24544909 podStartE2EDuration="2.653367426s" podCreationTimestamp="2026-01-28 19:02:01 +0000 UTC" firstStartedPulling="2026-01-28 19:02:02.661552888 +0000 UTC m=+2933.488115709" lastFinishedPulling="2026-01-28 19:02:03.069471224 +0000 UTC m=+2933.896034045" observedRunningTime="2026-01-28 19:02:03.652548443 +0000 UTC m=+2934.479111264" watchObservedRunningTime="2026-01-28 19:02:03.653367426 +0000 UTC m=+2934.479930257" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.185828 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.186411 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.186461 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.187337 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.187395 4985 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" gracePeriod=600 Jan 28 19:02:11 crc kubenswrapper[4985]: E0128 19:02:11.318375 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.721948 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" exitCode=0 Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.722011 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb"} Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.722397 4985 scope.go:117] "RemoveContainer" containerID="5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.723682 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:02:11 crc kubenswrapper[4985]: E0128 19:02:11.726978 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.191297 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.193688 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.213687 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.241584 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.241645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.241778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.344056 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.344648 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.344801 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.345080 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.345245 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.386456 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.512478 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:13 crc kubenswrapper[4985]: I0128 19:02:13.125996 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:13 crc kubenswrapper[4985]: I0128 19:02:13.775884 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerID="141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df" exitCode=0 Jan 28 19:02:13 crc kubenswrapper[4985]: I0128 19:02:13.775966 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerDied","Data":"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df"} Jan 28 19:02:13 crc kubenswrapper[4985]: I0128 19:02:13.776175 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerStarted","Data":"d929c4c8bbc677706ab10198545032bdd49d95e33281d5782ef5fb53e383b1ef"} Jan 28 19:02:16 crc kubenswrapper[4985]: I0128 19:02:16.814802 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerStarted","Data":"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0"} Jan 28 19:02:21 crc kubenswrapper[4985]: I0128 19:02:21.880038 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerID="154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0" exitCode=0 Jan 28 19:02:21 crc kubenswrapper[4985]: I0128 19:02:21.880121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerDied","Data":"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0"} Jan 28 19:02:27 crc kubenswrapper[4985]: I0128 19:02:27.266025 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:02:27 crc kubenswrapper[4985]: E0128 19:02:27.266887 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:02:28 crc kubenswrapper[4985]: I0128 19:02:28.035742 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerStarted","Data":"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a"} Jan 28 19:02:28 crc kubenswrapper[4985]: I0128 19:02:28.065762 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fqckw" podStartSLOduration=2.225900421 podStartE2EDuration="16.065739976s" podCreationTimestamp="2026-01-28 19:02:12 +0000 UTC" firstStartedPulling="2026-01-28 19:02:13.779909128 +0000 UTC m=+2944.606471949" lastFinishedPulling="2026-01-28 19:02:27.619748683 +0000 UTC m=+2958.446311504" observedRunningTime="2026-01-28 19:02:28.054531989 +0000 UTC m=+2958.881094820" watchObservedRunningTime="2026-01-28 19:02:28.065739976 +0000 UTC m=+2958.892302797" Jan 28 19:02:32 crc kubenswrapper[4985]: I0128 19:02:32.512616 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:32 crc kubenswrapper[4985]: I0128 19:02:32.513129 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:32 crc kubenswrapper[4985]: I0128 19:02:32.570043 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:33 crc kubenswrapper[4985]: I0128 19:02:33.125582 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:33 crc kubenswrapper[4985]: I0128 19:02:33.175724 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.097448 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fqckw" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="registry-server" containerID="cri-o://357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" gracePeriod=2 Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.608340 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.672457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") pod \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.672564 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") pod \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.672905 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") pod \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") "
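
The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (19:02:28.065739976 - 19:02:12 = 16.065739976s), and podStartSLOduration is that figure minus the image-pull window lastFinishedPulling - firstStartedPulling (16.065739976s - 13.839839555s = 2.225900421s), consistent with the SLO metric excluding time spent pulling images. That relationship is inferred from the logged values themselves; a short Go check of the arithmetic using the timestamps from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    // Recomputes the two durations reported by the pod_startup_latency_tracker
    // entry for community-operators-fqckw. The relationship used here (SLO
    // duration = end-to-end duration minus the image-pull window) is inferred
    // from the logged values, which it reproduces exactly.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        mustParse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := mustParse("2026-01-28 19:02:12 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2026-01-28 19:02:13.779909128 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2026-01-28 19:02:27.619748683 +0000 UTC")  // lastFinishedPulling
        running := mustParse("2026-01-28 19:02:28.065739976 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)          // podStartE2EDuration: 16.065739976s
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 2.225900421s
        fmt.Println(e2e, slo)
    }

The same relationship holds for the telemetry-power-monitoring pod further down in this section (2.392655982s - 0.427550971s = 1.965105011s).
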
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.680146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7" (OuterVolumeSpecName: "kube-api-access-cppn7") pod "a0c408a3-7c9d-4083-8497-0d63e85a2e75" (UID: "a0c408a3-7c9d-4083-8497-0d63e85a2e75"). InnerVolumeSpecName "kube-api-access-cppn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.731980 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0c408a3-7c9d-4083-8497-0d63e85a2e75" (UID: "a0c408a3-7c9d-4083-8497-0d63e85a2e75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.776407 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.776674 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.776762 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.108985 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerID="357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" exitCode=0 Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.109040 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerDied","Data":"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a"} Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.109073 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerDied","Data":"d929c4c8bbc677706ab10198545032bdd49d95e33281d5782ef5fb53e383b1ef"} Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.109091 4985 scope.go:117] "RemoveContainer" containerID="357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.109230 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.137716 4985 scope.go:117] "RemoveContainer" containerID="154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.162808 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.172181 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.189820 4985 scope.go:117] "RemoveContainer" containerID="141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.234592 4985 scope.go:117] "RemoveContainer" containerID="357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" Jan 28 19:02:36 crc kubenswrapper[4985]: E0128 19:02:36.235024 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a\": container with ID starting with 357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a not found: ID does not exist" containerID="357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.235066 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a"} err="failed to get container status \"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a\": rpc error: code = NotFound desc = could not find container \"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a\": container with ID starting with 357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a not found: ID does not exist" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.235096 4985 scope.go:117] "RemoveContainer" containerID="154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0" Jan 28 19:02:36 crc kubenswrapper[4985]: E0128 19:02:36.235385 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0\": container with ID starting with 154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0 not found: ID does not exist" containerID="154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.235406 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0"} err="failed to get container status \"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0\": rpc error: code = NotFound desc = could not find container \"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0\": container with ID starting with 154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0 not found: ID does not exist" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.235423 4985 scope.go:117] "RemoveContainer" containerID="141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df" Jan 28 19:02:36 crc kubenswrapper[4985]: E0128 19:02:36.236033 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df\": container with ID starting with 141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df not found: ID does not exist" containerID="141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.236053 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df"} err="failed to get container status \"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df\": rpc error: code = NotFound desc = could not find container \"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df\": container with ID starting with 141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df not found: ID does not exist" Jan 28 19:02:37 crc kubenswrapper[4985]: I0128 19:02:37.277420 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" path="/var/lib/kubelet/pods/a0c408a3-7c9d-4083-8497-0d63e85a2e75/volumes" Jan 28 19:02:38 crc kubenswrapper[4985]: I0128 19:02:38.264419 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:02:38 crc kubenswrapper[4985]: E0128 19:02:38.265042 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:02:53 crc kubenswrapper[4985]: I0128 19:02:53.264214 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:02:53 crc kubenswrapper[4985]: E0128 19:02:53.265370 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:05 crc kubenswrapper[4985]: I0128 19:03:05.264384 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:05 crc kubenswrapper[4985]: E0128 19:03:05.265177 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:20 crc kubenswrapper[4985]: I0128 19:03:20.264382 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:20 crc kubenswrapper[4985]: E0128 19:03:20.265316 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:31 crc kubenswrapper[4985]: I0128 19:03:31.276512 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:31 crc kubenswrapper[4985]: E0128 19:03:31.277169 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:44 crc kubenswrapper[4985]: I0128 19:03:44.263840 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:44 crc kubenswrapper[4985]: E0128 19:03:44.264639 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:56 crc kubenswrapper[4985]: I0128 19:03:56.264119 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:56 crc kubenswrapper[4985]: E0128 19:03:56.264890 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:04:07 crc kubenswrapper[4985]: I0128 19:04:07.264681 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:04:07 crc kubenswrapper[4985]: E0128 19:04:07.265566 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:04:21 crc kubenswrapper[4985]: I0128 19:04:21.271216 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:04:21 crc kubenswrapper[4985]: E0128 19:04:21.272074 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:04:27 crc kubenswrapper[4985]: I0128 19:04:27.318522 4985 generic.go:334] "Generic (PLEG): container finished" podID="557f8a1e-1a37-47a3-aa41-7222181ea137" containerID="8b17231f5ddf8a4fcdf6edbbf7bfe5301dfb0efab463adfdf2cac11011e5b761" exitCode=0 Jan 28 19:04:27 crc kubenswrapper[4985]: I0128 19:04:27.318616 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" event={"ID":"557f8a1e-1a37-47a3-aa41-7222181ea137","Type":"ContainerDied","Data":"8b17231f5ddf8a4fcdf6edbbf7bfe5301dfb0efab463adfdf2cac11011e5b761"} Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.839466 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957291 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957400 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957467 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957511 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz68d\" (UniqueName: \"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957570 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957687 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957715 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.963684 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.968443 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d" (OuterVolumeSpecName: "kube-api-access-gz68d") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "kube-api-access-gz68d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.995529 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory" (OuterVolumeSpecName: "inventory") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.998397 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.002727 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.002765 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.003820 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060626 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060662 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060675 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz68d\" (UniqueName: \"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060685 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060696 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060704 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060713 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.340191 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" event={"ID":"557f8a1e-1a37-47a3-aa41-7222181ea137","Type":"ContainerDied","Data":"5c9ce223c2123209d1a3f6e2f8a810235bf69b9d7616933eef101635da4de2e3"} Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.340227 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c9ce223c2123209d1a3f6e2f8a810235bf69b9d7616933eef101635da4de2e3" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.340286 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454412 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls"] Jan 28 19:04:29 crc kubenswrapper[4985]: E0128 19:04:29.454899 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557f8a1e-1a37-47a3-aa41-7222181ea137" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454916 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="557f8a1e-1a37-47a3-aa41-7222181ea137" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 19:04:29 crc kubenswrapper[4985]: E0128 19:04:29.454951 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="extract-utilities" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454957 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="extract-utilities" Jan 28 19:04:29 crc kubenswrapper[4985]: E0128 19:04:29.454965 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="registry-server" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454971 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="registry-server" Jan 28 19:04:29 crc kubenswrapper[4985]: E0128 19:04:29.454983 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="extract-content" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454988 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="extract-content" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.455224 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="557f8a1e-1a37-47a3-aa41-7222181ea137" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.455262 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="registry-server" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.456080 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.461657 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.461804 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.461884 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.461899 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.462126 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.471961 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls"] Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572768 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572817 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.573033 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.674829 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675083 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675168 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675205 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675465 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.686899 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.687135 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.688047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.689341 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.692402 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.703174 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.703667 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.780595 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:30 crc kubenswrapper[4985]: I0128 19:04:30.352832 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:04:30 crc kubenswrapper[4985]: I0128 19:04:30.354602 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls"] Jan 28 19:04:31 crc kubenswrapper[4985]: I0128 19:04:31.367991 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" event={"ID":"d9d4a4e3-9f29-45a2-9748-d133f122af06","Type":"ContainerStarted","Data":"8ea8fcb948c015ea73698aa70b25889e81199d3f1076b232700b8bb7c130da10"} Jan 28 19:04:31 crc kubenswrapper[4985]: I0128 19:04:31.368359 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" event={"ID":"d9d4a4e3-9f29-45a2-9748-d133f122af06","Type":"ContainerStarted","Data":"b199812b5cc9cf5d92c4b1353a88e7f0beb570cf77c1f9a72103035686c3c51a"} Jan 28 19:04:31 crc kubenswrapper[4985]: I0128 19:04:31.392673 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" podStartSLOduration=1.965105011 podStartE2EDuration="2.392655982s" podCreationTimestamp="2026-01-28 19:04:29 +0000 UTC" firstStartedPulling="2026-01-28 19:04:30.352513706 +0000 UTC m=+3081.179076527" lastFinishedPulling="2026-01-28 19:04:30.780064677 +0000 UTC m=+3081.606627498" observedRunningTime="2026-01-28 19:04:31.387808824 +0000 UTC m=+3082.214371655" watchObservedRunningTime="2026-01-28 19:04:31.392655982 +0000 UTC m=+3082.219218803" Jan 28 19:04:33 crc kubenswrapper[4985]: I0128 19:04:33.264443 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:04:33 crc kubenswrapper[4985]: E0128 19:04:33.264977 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:04:48 crc kubenswrapper[4985]: I0128 19:04:48.264564 4985 scope.go:117] "RemoveContainer" 
containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:04:48 crc kubenswrapper[4985]: E0128 19:04:48.266850 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:01 crc kubenswrapper[4985]: I0128 19:05:01.273239 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:01 crc kubenswrapper[4985]: E0128 19:05:01.274126 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:15 crc kubenswrapper[4985]: I0128 19:05:15.264493 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:15 crc kubenswrapper[4985]: E0128 19:05:15.265525 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:27 crc kubenswrapper[4985]: I0128 19:05:27.264947 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:27 crc kubenswrapper[4985]: E0128 19:05:27.265819 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:39 crc kubenswrapper[4985]: I0128 19:05:39.265134 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:39 crc kubenswrapper[4985]: E0128 19:05:39.266225 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:54 crc kubenswrapper[4985]: I0128 19:05:54.265597 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:54 crc kubenswrapper[4985]: E0128 19:05:54.266813 4985 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:05 crc kubenswrapper[4985]: I0128 19:06:05.265219 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:05 crc kubenswrapper[4985]: E0128 19:06:05.265989 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:16 crc kubenswrapper[4985]: I0128 19:06:16.264615 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:16 crc kubenswrapper[4985]: E0128 19:06:16.265522 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:23 crc kubenswrapper[4985]: I0128 19:06:23.632166 4985 generic.go:334] "Generic (PLEG): container finished" podID="d9d4a4e3-9f29-45a2-9748-d133f122af06" containerID="8ea8fcb948c015ea73698aa70b25889e81199d3f1076b232700b8bb7c130da10" exitCode=0 Jan 28 19:06:23 crc kubenswrapper[4985]: I0128 19:06:23.632859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" event={"ID":"d9d4a4e3-9f29-45a2-9748-d133f122af06","Type":"ContainerDied","Data":"8ea8fcb948c015ea73698aa70b25889e81199d3f1076b232700b8bb7c130da10"} Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.158609 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264298 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264655 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264703 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264783 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264953 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.265056 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.265085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.270265 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.272890 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf" (OuterVolumeSpecName: "kube-api-access-wxxnf") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "kube-api-access-wxxnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.298790 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.306287 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.306345 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.306422 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.312400 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory" (OuterVolumeSpecName: "inventory") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378769 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378795 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378807 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378816 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378824 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378834 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378842 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.657749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" event={"ID":"d9d4a4e3-9f29-45a2-9748-d133f122af06","Type":"ContainerDied","Data":"b199812b5cc9cf5d92c4b1353a88e7f0beb570cf77c1f9a72103035686c3c51a"} Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.657794 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b199812b5cc9cf5d92c4b1353a88e7f0beb570cf77c1f9a72103035686c3c51a" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.657810 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.759011 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7"] Jan 28 19:06:25 crc kubenswrapper[4985]: E0128 19:06:25.759563 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d4a4e3-9f29-45a2-9748-d133f122af06" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.759579 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d4a4e3-9f29-45a2-9748-d133f122af06" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.759838 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d4a4e3-9f29-45a2-9748-d133f122af06" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.760628 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.768086 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.768130 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.768845 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.768926 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.769061 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.786687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.786944 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.786998 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 
19:06:25.787140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.787224 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.788974 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7"] Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.888426 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.888702 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.888801 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.888931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.889049 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.894864 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.894899 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.895093 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.895453 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.905656 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:26 crc kubenswrapper[4985]: I0128 19:06:26.086083 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:26 crc kubenswrapper[4985]: I0128 19:06:26.669570 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7"] Jan 28 19:06:27 crc kubenswrapper[4985]: I0128 19:06:27.723572 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" event={"ID":"c6c90c6c-aa78-4215-9c43-acd22891abfb","Type":"ContainerStarted","Data":"eebc1fab3fbe6e3bc4d99333108d03286bc86771600f6891902f829e592cdfc4"} Jan 28 19:06:27 crc kubenswrapper[4985]: I0128 19:06:27.725012 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" event={"ID":"c6c90c6c-aa78-4215-9c43-acd22891abfb","Type":"ContainerStarted","Data":"23310972a28ed4e2f0fa6d03c0061ee3ae2e74f087c158d1a566307e4d2f53b6"} Jan 28 19:06:27 crc kubenswrapper[4985]: I0128 19:06:27.753462 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" podStartSLOduration=2.192102215 podStartE2EDuration="2.753442121s" podCreationTimestamp="2026-01-28 19:06:25 +0000 UTC" firstStartedPulling="2026-01-28 19:06:26.668346863 +0000 UTC m=+3197.494909684" lastFinishedPulling="2026-01-28 19:06:27.229686769 +0000 UTC m=+3198.056249590" observedRunningTime="2026-01-28 19:06:27.747789481 +0000 UTC m=+3198.574352322" watchObservedRunningTime="2026-01-28 19:06:27.753442121 +0000 UTC m=+3198.580004942" Jan 28 19:06:29 crc kubenswrapper[4985]: I0128 19:06:29.265290 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:29 crc kubenswrapper[4985]: E0128 19:06:29.265637 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:41 crc kubenswrapper[4985]: I0128 19:06:41.869539 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6c90c6c-aa78-4215-9c43-acd22891abfb" containerID="eebc1fab3fbe6e3bc4d99333108d03286bc86771600f6891902f829e592cdfc4" exitCode=0 Jan 28 19:06:41 crc kubenswrapper[4985]: I0128 19:06:41.869742 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" event={"ID":"c6c90c6c-aa78-4215-9c43-acd22891abfb","Type":"ContainerDied","Data":"eebc1fab3fbe6e3bc4d99333108d03286bc86771600f6891902f829e592cdfc4"} Jan 28 19:06:42 crc kubenswrapper[4985]: I0128 19:06:42.264432 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:42 crc kubenswrapper[4985]: E0128 19:06:42.264825 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:43 crc 
Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.367080 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7"
Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.410872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") "
Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.410936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") "
Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.410960 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") "
Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.411119 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") "
Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.411320 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") "
Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.419047 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr" (OuterVolumeSpecName: "kube-api-access-9tmdr") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "kube-api-access-9tmdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.455197 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.459538 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.460324 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory" (OuterVolumeSpecName: "inventory") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.463433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514230 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514283 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514297 4985 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514307 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514317 4985 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.895312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" event={"ID":"c6c90c6c-aa78-4215-9c43-acd22891abfb","Type":"ContainerDied","Data":"23310972a28ed4e2f0fa6d03c0061ee3ae2e74f087c158d1a566307e4d2f53b6"} Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.895351 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23310972a28ed4e2f0fa6d03c0061ee3ae2e74f087c158d1a566307e4d2f53b6" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.895386 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:55 crc kubenswrapper[4985]: I0128 19:06:55.264614 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:55 crc kubenswrapper[4985]: E0128 19:06:55.265695 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:07:08 crc kubenswrapper[4985]: I0128 19:07:08.265160 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:07:08 crc kubenswrapper[4985]: E0128 19:07:08.266192 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:07:22 crc kubenswrapper[4985]: I0128 19:07:22.265907 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:07:23 crc kubenswrapper[4985]: I0128 19:07:23.353099 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6"} Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.276288 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:20 crc kubenswrapper[4985]: E0128 19:08:20.277425 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c90c6c-aa78-4215-9c43-acd22891abfb" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.277443 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c90c6c-aa78-4215-9c43-acd22891abfb" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.277697 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c90c6c-aa78-4215-9c43-acd22891abfb" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.279907 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.286868 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.410334 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.410535 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.410689 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.513557 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.513640 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.513782 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.514214 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.514269 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.534227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.611911 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:21 crc kubenswrapper[4985]: I0128 19:08:21.112778 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:22 crc kubenswrapper[4985]: I0128 19:08:22.042115 4985 generic.go:334] "Generic (PLEG): container finished" podID="103d61a7-b2c1-4122-845a-e63c994c8946" containerID="152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070" exitCode=0 Jan 28 19:08:22 crc kubenswrapper[4985]: I0128 19:08:22.042232 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerDied","Data":"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070"} Jan 28 19:08:22 crc kubenswrapper[4985]: I0128 19:08:22.042489 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerStarted","Data":"8ca1e66c758ccac6692df31c7cc94b8051c203fd6964bbf5f1d0f882e2c52e2e"} Jan 28 19:08:24 crc kubenswrapper[4985]: I0128 19:08:24.067891 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerStarted","Data":"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488"} Jan 28 19:08:28 crc kubenswrapper[4985]: I0128 19:08:28.114946 4985 generic.go:334] "Generic (PLEG): container finished" podID="103d61a7-b2c1-4122-845a-e63c994c8946" containerID="eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488" exitCode=0 Jan 28 19:08:28 crc kubenswrapper[4985]: I0128 19:08:28.115024 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerDied","Data":"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488"} Jan 28 19:08:30 crc kubenswrapper[4985]: I0128 19:08:30.139335 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerStarted","Data":"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8"} Jan 28 19:08:30 crc kubenswrapper[4985]: I0128 19:08:30.160299 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vhxbr" podStartSLOduration=3.107878626 podStartE2EDuration="10.16027622s" podCreationTimestamp="2026-01-28 19:08:20 +0000 UTC" firstStartedPulling="2026-01-28 19:08:22.045467532 +0000 UTC m=+3312.872030353" lastFinishedPulling="2026-01-28 19:08:29.097865126 +0000 UTC m=+3319.924427947" observedRunningTime="2026-01-28 19:08:30.156986287 +0000 UTC m=+3320.983549128" watchObservedRunningTime="2026-01-28 19:08:30.16027622 +0000 UTC m=+3320.986839041" Jan 28 19:08:30 crc kubenswrapper[4985]: I0128 19:08:30.612463 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 
28 19:08:30 crc kubenswrapper[4985]: I0128 19:08:30.612873 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vhxbr"
Jan 28 19:08:31 crc kubenswrapper[4985]: I0128 19:08:31.668194 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vhxbr" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" probeResult="failure" output=<
Jan 28 19:08:31 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 19:08:31 crc kubenswrapper[4985]: >
Jan 28 19:08:40 crc kubenswrapper[4985]: I0128 19:08:40.665575 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vhxbr"
Jan 28 19:08:40 crc kubenswrapper[4985]: I0128 19:08:40.731092 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vhxbr"
Jan 28 19:08:40 crc kubenswrapper[4985]: I0128 19:08:40.913778 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"]
Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.271780 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vhxbr" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" containerID="cri-o://a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" gracePeriod=2
Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.804145 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhxbr"
Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.981500 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") pod \"103d61a7-b2c1-4122-845a-e63c994c8946\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") "
Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.981936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") pod \"103d61a7-b2c1-4122-845a-e63c994c8946\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") "
Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.982139 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") pod \"103d61a7-b2c1-4122-845a-e63c994c8946\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") "
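The startup-probe failure above ('failed to connect service ":50051" within 1s') means the catalog pod's registry-server was not yet accepting connections on its gRPC port; nine seconds later the same probe reports status="started". The real probe speaks the gRPC health protocol, but a plain TCP connect with the same 1-second deadline models the reachability half of the check. A sketch, with host and port taken from the log line:

import socket

def port_open(host: str = "127.0.0.1", port: int = 50051, timeout: float = 1.0) -> bool:
    """Return True once something is listening on the probed port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False  # connection refused / timed out, as in the probe output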
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.990661 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh" (OuterVolumeSpecName: "kube-api-access-68ckh") pod "103d61a7-b2c1-4122-845a-e63c994c8946" (UID: "103d61a7-b2c1-4122-845a-e63c994c8946"). InnerVolumeSpecName "kube-api-access-68ckh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.086398 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.086446 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.136264 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "103d61a7-b2c1-4122-845a-e63c994c8946" (UID: "103d61a7-b2c1-4122-845a-e63c994c8946"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.190111 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297688 4985 generic.go:334] "Generic (PLEG): container finished" podID="103d61a7-b2c1-4122-845a-e63c994c8946" containerID="a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" exitCode=0 Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerDied","Data":"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8"} Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297759 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerDied","Data":"8ca1e66c758ccac6692df31c7cc94b8051c203fd6964bbf5f1d0f882e2c52e2e"} Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297760 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297777 4985 scope.go:117] "RemoveContainer" containerID="a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.339778 4985 scope.go:117] "RemoveContainer" containerID="eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.341530 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.367414 4985 scope.go:117] "RemoveContainer" containerID="152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.388889 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.415362 4985 scope.go:117] "RemoveContainer" containerID="a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" Jan 28 19:08:43 crc kubenswrapper[4985]: E0128 19:08:43.416686 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8\": container with ID starting with a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8 not found: ID does not exist" containerID="a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.416829 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8"} err="failed to get container status \"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8\": rpc error: code = NotFound desc = could not find container \"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8\": container with ID starting with a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8 not found: ID does not exist" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.416947 4985 scope.go:117] "RemoveContainer" containerID="eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488" Jan 28 19:08:43 crc kubenswrapper[4985]: E0128 19:08:43.420872 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488\": container with ID starting with eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488 not found: ID does not exist" containerID="eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.426549 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488"} err="failed to get container status \"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488\": rpc error: code = NotFound desc = could not find container \"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488\": container with ID starting with eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488 not found: ID does not exist" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.426615 4985 scope.go:117] "RemoveContainer" 
containerID="152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070" Jan 28 19:08:43 crc kubenswrapper[4985]: E0128 19:08:43.427242 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070\": container with ID starting with 152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070 not found: ID does not exist" containerID="152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.427681 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070"} err="failed to get container status \"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070\": rpc error: code = NotFound desc = could not find container \"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070\": container with ID starting with 152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070 not found: ID does not exist" Jan 28 19:08:43 crc kubenswrapper[4985]: E0128 19:08:43.580622 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod103d61a7_b2c1_4122_845a_e63c994c8946.slice/crio-8ca1e66c758ccac6692df31c7cc94b8051c203fd6964bbf5f1d0f882e2c52e2e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod103d61a7_b2c1_4122_845a_e63c994c8946.slice\": RecentStats: unable to find data in memory cache]" Jan 28 19:08:45 crc kubenswrapper[4985]: I0128 19:08:45.280239 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" path="/var/lib/kubelet/pods/103d61a7-b2c1-4122-845a-e63c994c8946/volumes" Jan 28 19:09:41 crc kubenswrapper[4985]: I0128 19:09:41.186388 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:09:41 crc kubenswrapper[4985]: I0128 19:09:41.186890 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:10:11 crc kubenswrapper[4985]: I0128 19:10:11.185740 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:10:11 crc kubenswrapper[4985]: I0128 19:10:11.187929 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.186211 4985 patch_prober.go:28] interesting 
Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.186211 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.186886 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.186943 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.188010 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.188086 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6" gracePeriod=600
Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.675426 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6" exitCode=0
Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.675483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6"}
Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.675705 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf"}
Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.675739 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb"
Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.804582 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"]
Jan 28 19:11:02 crc kubenswrapper[4985]: E0128 19:11:02.805878 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="extract-utilities"
Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.805898 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="extract-utilities"
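The sequence above shows how a liveness failure resolves: the probe (GET http://127.0.0.1:8798/health) has been failing roughly every 30 seconds since 19:09:41, and at 19:10:41 the kubelet marks the container unhealthy and kills it with gracePeriod=600 so it can be restarted in place. A minimal model of the check-and-restart policy; the URL comes from the log, while the timeout and failure threshold are assumptions rather than values read from the pod spec:

import urllib.error
import urllib.request

def liveness_ok(url: str = "http://127.0.0.1:8798/health", timeout: float = 1.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False   # e.g. "connect: connection refused" as in the log

def should_restart(history: list, failure_threshold: int = 3) -> bool:
    # Restart only once the last `failure_threshold` probes all failed;
    # three consecutive failures (19:09:41, 19:10:11, 19:10:41) precede
    # the "failed liveness probe, will be restarted" message above.
    tail = history[-failure_threshold:]
    return len(tail) == failure_threshold and not any(tail)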
containerName="extract-content" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.805924 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="extract-content" Jan 28 19:11:02 crc kubenswrapper[4985]: E0128 19:11:02.805939 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.805947 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.806244 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.808262 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.817192 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.951057 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.951139 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.951225 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054449 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") pod \"redhat-marketplace-kqksb\" (UID: 
\"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054747 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054815 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.076238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.139119 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.735909 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.954647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerStarted","Data":"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7"} Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.954703 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerStarted","Data":"5e7a0066a54de8d6e4d60ae7e1974a56dabafdfacb5cba38824d8a6aa776b194"} Jan 28 19:11:04 crc kubenswrapper[4985]: I0128 19:11:04.966686 4985 generic.go:334] "Generic (PLEG): container finished" podID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerID="3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7" exitCode=0 Jan 28 19:11:04 crc kubenswrapper[4985]: I0128 19:11:04.967233 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerDied","Data":"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7"} Jan 28 19:11:04 crc kubenswrapper[4985]: I0128 19:11:04.970170 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:11:07 crc kubenswrapper[4985]: I0128 19:11:07.001126 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerStarted","Data":"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30"} Jan 28 19:11:08 crc kubenswrapper[4985]: I0128 19:11:08.016608 4985 generic.go:334] "Generic (PLEG): container finished" podID="edd68953-5617-46ec-8c09-7189d7dfab9a" 
containerID="a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30" exitCode=0 Jan 28 19:11:08 crc kubenswrapper[4985]: I0128 19:11:08.016690 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerDied","Data":"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30"} Jan 28 19:11:09 crc kubenswrapper[4985]: I0128 19:11:09.032125 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerStarted","Data":"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d"} Jan 28 19:11:09 crc kubenswrapper[4985]: I0128 19:11:09.056407 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kqksb" podStartSLOduration=3.610566189 podStartE2EDuration="7.056389532s" podCreationTimestamp="2026-01-28 19:11:02 +0000 UTC" firstStartedPulling="2026-01-28 19:11:04.969868944 +0000 UTC m=+3475.796431765" lastFinishedPulling="2026-01-28 19:11:08.415692277 +0000 UTC m=+3479.242255108" observedRunningTime="2026-01-28 19:11:09.051978647 +0000 UTC m=+3479.878541468" watchObservedRunningTime="2026-01-28 19:11:09.056389532 +0000 UTC m=+3479.882952353" Jan 28 19:11:13 crc kubenswrapper[4985]: I0128 19:11:13.140325 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:13 crc kubenswrapper[4985]: I0128 19:11:13.140979 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:13 crc kubenswrapper[4985]: I0128 19:11:13.190550 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:14 crc kubenswrapper[4985]: I0128 19:11:14.140160 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:14 crc kubenswrapper[4985]: I0128 19:11:14.220084 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.117040 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kqksb" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="registry-server" containerID="cri-o://0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" gracePeriod=2 Jan 28 19:11:16 crc kubenswrapper[4985]: E0128 19:11:16.291330 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedd68953_5617_46ec_8c09_7189d7dfab9a.slice/crio-0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d.scope\": RecentStats: unable to find data in memory cache]" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.695506 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.815554 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") pod \"edd68953-5617-46ec-8c09-7189d7dfab9a\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.815622 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") pod \"edd68953-5617-46ec-8c09-7189d7dfab9a\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.815693 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") pod \"edd68953-5617-46ec-8c09-7189d7dfab9a\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.817781 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities" (OuterVolumeSpecName: "utilities") pod "edd68953-5617-46ec-8c09-7189d7dfab9a" (UID: "edd68953-5617-46ec-8c09-7189d7dfab9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.831756 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs" (OuterVolumeSpecName: "kube-api-access-fpvfs") pod "edd68953-5617-46ec-8c09-7189d7dfab9a" (UID: "edd68953-5617-46ec-8c09-7189d7dfab9a"). InnerVolumeSpecName "kube-api-access-fpvfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.839502 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edd68953-5617-46ec-8c09-7189d7dfab9a" (UID: "edd68953-5617-46ec-8c09-7189d7dfab9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.919044 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.919131 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.919142 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.129474 4985 generic.go:334] "Generic (PLEG): container finished" podID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerID="0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" exitCode=0 Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.129530 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerDied","Data":"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d"} Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.129615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerDied","Data":"5e7a0066a54de8d6e4d60ae7e1974a56dabafdfacb5cba38824d8a6aa776b194"} Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.129656 4985 scope.go:117] "RemoveContainer" containerID="0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.130909 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.174734 4985 scope.go:117] "RemoveContainer" containerID="a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.209931 4985 scope.go:117] "RemoveContainer" containerID="3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.215425 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.234025 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265087 4985 scope.go:117] "RemoveContainer" containerID="0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" Jan 28 19:11:17 crc kubenswrapper[4985]: E0128 19:11:17.265516 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d\": container with ID starting with 0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d not found: ID does not exist" containerID="0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265547 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d"} err="failed to get container status \"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d\": rpc error: code = NotFound desc = could not find container \"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d\": container with ID starting with 0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d not found: ID does not exist" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265563 4985 scope.go:117] "RemoveContainer" containerID="a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30" Jan 28 19:11:17 crc kubenswrapper[4985]: E0128 19:11:17.265908 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30\": container with ID starting with a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30 not found: ID does not exist" containerID="a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265944 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30"} err="failed to get container status \"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30\": rpc error: code = NotFound desc = could not find container \"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30\": container with ID starting with a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30 not found: ID does not exist" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265966 4985 scope.go:117] "RemoveContainer" containerID="3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7" Jan 28 19:11:17 crc kubenswrapper[4985]: E0128 19:11:17.266287 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7\": container with ID starting with 3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7 not found: ID does not exist" containerID="3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.266316 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7"} err="failed to get container status \"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7\": rpc error: code = NotFound desc = could not find container \"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7\": container with ID starting with 3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7 not found: ID does not exist" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.275611 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" path="/var/lib/kubelet/pods/edd68953-5617-46ec-8c09-7189d7dfab9a/volumes" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.524468 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:22 crc kubenswrapper[4985]: E0128 19:11:22.527443 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="extract-content" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.527554 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="extract-content" Jan 28 19:11:22 crc kubenswrapper[4985]: E0128 19:11:22.527652 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="extract-utilities" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.527753 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="extract-utilities" Jan 28 19:11:22 crc kubenswrapper[4985]: E0128 19:11:22.527876 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="registry-server" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.527963 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="registry-server" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.528399 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="registry-server" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.531114 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.548098 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.572092 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.572338 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.572620 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675162 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675264 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675664 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.695621 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.857733 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:23 crc kubenswrapper[4985]: I0128 19:11:23.416194 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:24 crc kubenswrapper[4985]: I0128 19:11:24.235382 4985 generic.go:334] "Generic (PLEG): container finished" podID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerID="2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab" exitCode=0 Jan 28 19:11:24 crc kubenswrapper[4985]: I0128 19:11:24.235465 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerDied","Data":"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab"} Jan 28 19:11:24 crc kubenswrapper[4985]: I0128 19:11:24.235696 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerStarted","Data":"cd5483b11db8f03e88cd6505a04e2d29146345183abc44446dd962fee7ea0233"} Jan 28 19:11:26 crc kubenswrapper[4985]: I0128 19:11:26.262957 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerStarted","Data":"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7"} Jan 28 19:11:29 crc kubenswrapper[4985]: I0128 19:11:29.299187 4985 generic.go:334] "Generic (PLEG): container finished" podID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerID="e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7" exitCode=0 Jan 28 19:11:29 crc kubenswrapper[4985]: I0128 19:11:29.299730 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerDied","Data":"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7"} Jan 28 19:11:30 crc kubenswrapper[4985]: I0128 19:11:30.334030 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerStarted","Data":"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a"} Jan 28 19:11:30 crc kubenswrapper[4985]: I0128 19:11:30.359160 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmd4h" podStartSLOduration=2.579191108 podStartE2EDuration="8.359138847s" podCreationTimestamp="2026-01-28 19:11:22 +0000 UTC" firstStartedPulling="2026-01-28 19:11:24.238033082 +0000 UTC m=+3495.064595903" lastFinishedPulling="2026-01-28 19:11:30.017980821 +0000 UTC m=+3500.844543642" observedRunningTime="2026-01-28 19:11:30.356865053 +0000 UTC m=+3501.183427874" watchObservedRunningTime="2026-01-28 19:11:30.359138847 +0000 UTC m=+3501.185701668" Jan 28 19:11:32 crc kubenswrapper[4985]: I0128 19:11:32.859652 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:32 crc kubenswrapper[4985]: I0128 19:11:32.860025 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:32 crc kubenswrapper[4985]: I0128 19:11:32.917643 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:42 crc kubenswrapper[4985]: I0128 19:11:42.930720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:43 crc kubenswrapper[4985]: I0128 19:11:43.016033 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:43 crc kubenswrapper[4985]: I0128 19:11:43.506355 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmd4h" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="registry-server" containerID="cri-o://86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" gracePeriod=2 Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.005753 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.137670 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") pod \"effbec3a-d9f3-442b-8323-f1efe45da6e7\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.137730 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") pod \"effbec3a-d9f3-442b-8323-f1efe45da6e7\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.137801 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") pod \"effbec3a-d9f3-442b-8323-f1efe45da6e7\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.138793 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities" (OuterVolumeSpecName: "utilities") pod "effbec3a-d9f3-442b-8323-f1efe45da6e7" (UID: "effbec3a-d9f3-442b-8323-f1efe45da6e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.144609 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f" (OuterVolumeSpecName: "kube-api-access-tct2f") pod "effbec3a-d9f3-442b-8323-f1efe45da6e7" (UID: "effbec3a-d9f3-442b-8323-f1efe45da6e7"). InnerVolumeSpecName "kube-api-access-tct2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.190040 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "effbec3a-d9f3-442b-8323-f1efe45da6e7" (UID: "effbec3a-d9f3-442b-8323-f1efe45da6e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.240364 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.240397 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.240407 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.519722 4985 generic.go:334] "Generic (PLEG): container finished" podID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerID="86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" exitCode=0 Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.519785 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.519804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerDied","Data":"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a"} Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.520325 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerDied","Data":"cd5483b11db8f03e88cd6505a04e2d29146345183abc44446dd962fee7ea0233"} Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.520352 4985 scope.go:117] "RemoveContainer" containerID="86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.545726 4985 scope.go:117] "RemoveContainer" containerID="e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.567103 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.578295 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.585707 4985 scope.go:117] "RemoveContainer" containerID="2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.634930 4985 scope.go:117] "RemoveContainer" containerID="86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" Jan 28 19:11:44 crc kubenswrapper[4985]: E0128 19:11:44.635342 4985 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a\": container with ID starting with 86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a not found: ID does not exist" containerID="86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.635389 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a"} err="failed to get container status \"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a\": rpc error: code = NotFound desc = could not find container \"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a\": container with ID starting with 86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a not found: ID does not exist" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.635418 4985 scope.go:117] "RemoveContainer" containerID="e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7" Jan 28 19:11:44 crc kubenswrapper[4985]: E0128 19:11:44.635892 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7\": container with ID starting with e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7 not found: ID does not exist" containerID="e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.635924 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7"} err="failed to get container status \"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7\": rpc error: code = NotFound desc = could not find container \"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7\": container with ID starting with e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7 not found: ID does not exist" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.635941 4985 scope.go:117] "RemoveContainer" containerID="2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab" Jan 28 19:11:44 crc kubenswrapper[4985]: E0128 19:11:44.636122 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab\": container with ID starting with 2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab not found: ID does not exist" containerID="2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.636148 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab"} err="failed to get container status \"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab\": rpc error: code = NotFound desc = could not find container \"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab\": container with ID starting with 2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab not found: ID does not exist" Jan 28 19:11:45 crc kubenswrapper[4985]: I0128 19:11:45.299018 4985 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" path="/var/lib/kubelet/pods/effbec3a-d9f3-442b-8323-f1efe45da6e7/volumes" Jan 28 19:12:41 crc kubenswrapper[4985]: I0128 19:12:41.185740 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:12:41 crc kubenswrapper[4985]: I0128 19:12:41.186199 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.186463 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.186987 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.957546 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:11 crc kubenswrapper[4985]: E0128 19:13:11.958479 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="extract-utilities" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.958498 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="extract-utilities" Jan 28 19:13:11 crc kubenswrapper[4985]: E0128 19:13:11.958523 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="registry-server" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.958531 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="registry-server" Jan 28 19:13:11 crc kubenswrapper[4985]: E0128 19:13:11.958556 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="extract-content" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.958568 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="extract-content" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.958843 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="registry-server" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.961318 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.976061 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.981134 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.981201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.981295 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084220 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084414 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084792 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.107281 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.307872 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.861924 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:13 crc kubenswrapper[4985]: I0128 19:13:13.631003 4985 generic.go:334] "Generic (PLEG): container finished" podID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerID="e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca" exitCode=0 Jan 28 19:13:13 crc kubenswrapper[4985]: I0128 19:13:13.631297 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerDied","Data":"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca"} Jan 28 19:13:13 crc kubenswrapper[4985]: I0128 19:13:13.631778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerStarted","Data":"8886093d8e543f3fc13c31718f237c34c3af925dbaec60d5dddf203751ff3f82"} Jan 28 19:13:16 crc kubenswrapper[4985]: I0128 19:13:16.673338 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerStarted","Data":"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116"} Jan 28 19:13:16 crc kubenswrapper[4985]: E0128 19:13:16.916304 4985 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.195:46748->38.102.83.195:43365: read tcp 38.102.83.195:46748->38.102.83.195:43365: read: connection reset by peer Jan 28 19:13:19 crc kubenswrapper[4985]: E0128 19:13:19.085960 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod950fa11d_42de_4bd7_87b2_f660e063c57f.slice/crio-conmon-21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod950fa11d_42de_4bd7_87b2_f660e063c57f.slice/crio-21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116.scope\": RecentStats: unable to find data in memory cache]" Jan 28 19:13:19 crc kubenswrapper[4985]: I0128 19:13:19.712835 4985 generic.go:334] "Generic (PLEG): container finished" podID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerID="21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116" exitCode=0 Jan 28 19:13:19 crc kubenswrapper[4985]: I0128 19:13:19.713186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerDied","Data":"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116"} Jan 28 19:13:20 crc kubenswrapper[4985]: I0128 19:13:20.729325 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerStarted","Data":"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb"} Jan 28 19:13:20 crc kubenswrapper[4985]: I0128 19:13:20.758168 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tkzhb" podStartSLOduration=3.21449852 podStartE2EDuration="9.758143356s" podCreationTimestamp="2026-01-28 19:13:11 +0000 UTC" firstStartedPulling="2026-01-28 19:13:13.633696511 +0000 UTC m=+3604.460259332" lastFinishedPulling="2026-01-28 19:13:20.177341337 +0000 UTC m=+3611.003904168" observedRunningTime="2026-01-28 19:13:20.745931951 +0000 UTC m=+3611.572494782" watchObservedRunningTime="2026-01-28 19:13:20.758143356 +0000 UTC m=+3611.584706177" Jan 28 19:13:22 crc kubenswrapper[4985]: I0128 19:13:22.308485 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:22 crc kubenswrapper[4985]: I0128 19:13:22.309368 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:23 crc kubenswrapper[4985]: I0128 19:13:23.360058 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tkzhb" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" probeResult="failure" output=< Jan 28 19:13:23 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:13:23 crc kubenswrapper[4985]: > Jan 28 19:13:32 crc kubenswrapper[4985]: I0128 19:13:32.360410 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:32 crc kubenswrapper[4985]: I0128 19:13:32.417591 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:32 crc kubenswrapper[4985]: I0128 19:13:32.605570 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:33 crc kubenswrapper[4985]: I0128 19:13:33.898529 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tkzhb" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" containerID="cri-o://5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" gracePeriod=2 Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.431827 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.442264 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") pod \"950fa11d-42de-4bd7-87b2-f660e063c57f\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.442459 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") pod \"950fa11d-42de-4bd7-87b2-f660e063c57f\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.442503 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") pod \"950fa11d-42de-4bd7-87b2-f660e063c57f\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.443158 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities" (OuterVolumeSpecName: "utilities") pod "950fa11d-42de-4bd7-87b2-f660e063c57f" (UID: "950fa11d-42de-4bd7-87b2-f660e063c57f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.458216 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj" (OuterVolumeSpecName: "kube-api-access-w5tsj") pod "950fa11d-42de-4bd7-87b2-f660e063c57f" (UID: "950fa11d-42de-4bd7-87b2-f660e063c57f"). InnerVolumeSpecName "kube-api-access-w5tsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.517934 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "950fa11d-42de-4bd7-87b2-f660e063c57f" (UID: "950fa11d-42de-4bd7-87b2-f660e063c57f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.545287 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.545330 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.545346 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.911768 4985 generic.go:334] "Generic (PLEG): container finished" podID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerID="5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" exitCode=0 Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.911847 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.911868 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerDied","Data":"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb"} Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.912200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerDied","Data":"8886093d8e543f3fc13c31718f237c34c3af925dbaec60d5dddf203751ff3f82"} Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.912218 4985 scope.go:117] "RemoveContainer" containerID="5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.932679 4985 scope.go:117] "RemoveContainer" containerID="21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.967261 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.972075 4985 scope.go:117] "RemoveContainer" containerID="e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.979943 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.022698 4985 scope.go:117] "RemoveContainer" containerID="5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" Jan 28 19:13:35 crc kubenswrapper[4985]: E0128 19:13:35.023329 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb\": container with ID starting with 5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb not found: ID does not exist" containerID="5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.023456 
4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb"} err="failed to get container status \"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb\": rpc error: code = NotFound desc = could not find container \"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb\": container with ID starting with 5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb not found: ID does not exist" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.023558 4985 scope.go:117] "RemoveContainer" containerID="21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116" Jan 28 19:13:35 crc kubenswrapper[4985]: E0128 19:13:35.024044 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116\": container with ID starting with 21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116 not found: ID does not exist" containerID="21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.024117 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116"} err="failed to get container status \"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116\": rpc error: code = NotFound desc = could not find container \"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116\": container with ID starting with 21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116 not found: ID does not exist" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.024147 4985 scope.go:117] "RemoveContainer" containerID="e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca" Jan 28 19:13:35 crc kubenswrapper[4985]: E0128 19:13:35.024507 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca\": container with ID starting with e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca not found: ID does not exist" containerID="e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.024628 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca"} err="failed to get container status \"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca\": rpc error: code = NotFound desc = could not find container \"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca\": container with ID starting with e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca not found: ID does not exist" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.278682 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" path="/var/lib/kubelet/pods/950fa11d-42de-4bd7-87b2-f660e063c57f/volumes" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.185784 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.186296 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.186351 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.187167 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.187223 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" gracePeriod=600 Jan 28 19:13:41 crc kubenswrapper[4985]: E0128 19:13:41.321999 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.988392 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" exitCode=0 Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.988437 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf"} Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.988469 4985 scope.go:117] "RemoveContainer" containerID="a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.989261 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:13:41 crc kubenswrapper[4985]: E0128 19:13:41.989624 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:13:57 crc kubenswrapper[4985]: I0128 19:13:57.264588 4985 scope.go:117] "RemoveContainer" 
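
From here the machine-config-daemon enters CrashLoopBackOff: the liveness probe on 127.0.0.1:8798 keeps failing, the container is killed, and its restart is delayed by the kubelet's per-container backoff, which doubles after each failed restart up to the 5m0s cap quoted in the errors below; the periodic pod sync keeps re-logging the back-off error in the meantime. A sketch of capped exponential backoff (the 10s initial period is an assumption for illustration):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Double the restart delay after each failure, capped at the
	// "back-off 5m0s" ceiling seen in the log entries below.
	const max = 5 * time.Minute
	delay := 10 * time.Second // assumed initial period
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: back-off %s\n", i, delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
}
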
containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:13:57 crc kubenswrapper[4985]: E0128 19:13:57.265388 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:14:09 crc kubenswrapper[4985]: I0128 19:14:09.267126 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:14:09 crc kubenswrapper[4985]: E0128 19:14:09.267879 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:14:21 crc kubenswrapper[4985]: I0128 19:14:21.273153 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:14:21 crc kubenswrapper[4985]: E0128 19:14:21.274391 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:14:32 crc kubenswrapper[4985]: I0128 19:14:32.264566 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:14:32 crc kubenswrapper[4985]: E0128 19:14:32.265468 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:14:46 crc kubenswrapper[4985]: I0128 19:14:46.264397 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:14:46 crc kubenswrapper[4985]: E0128 19:14:46.265114 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.177959 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 19:15:00 crc kubenswrapper[4985]: E0128 19:15:00.181627 4985 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="extract-utilities" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.181880 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="extract-utilities" Jan 28 19:15:00 crc kubenswrapper[4985]: E0128 19:15:00.181983 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.182067 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" Jan 28 19:15:00 crc kubenswrapper[4985]: E0128 19:15:00.182206 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="extract-content" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.182315 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="extract-content" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.182885 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.184338 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.188314 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.188931 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.190008 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.265738 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:00 crc kubenswrapper[4985]: E0128 19:15:00.267443 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.301193 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.301277 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: 
\"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.301612 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.404403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.404473 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.404593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.406211 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.415456 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.422078 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.520330 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:01 crc kubenswrapper[4985]: I0128 19:15:01.010117 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 19:15:01 crc kubenswrapper[4985]: I0128 19:15:01.916704 4985 generic.go:334] "Generic (PLEG): container finished" podID="dc7f7054-2ff2-4045-aa35-4345b449dc70" containerID="338f8d06b8e77092f3ed49ded314fa263d3bc00689eede0c01a39e28fc35ddd0" exitCode=0 Jan 28 19:15:01 crc kubenswrapper[4985]: I0128 19:15:01.916807 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" event={"ID":"dc7f7054-2ff2-4045-aa35-4345b449dc70","Type":"ContainerDied","Data":"338f8d06b8e77092f3ed49ded314fa263d3bc00689eede0c01a39e28fc35ddd0"} Jan 28 19:15:01 crc kubenswrapper[4985]: I0128 19:15:01.917200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" event={"ID":"dc7f7054-2ff2-4045-aa35-4345b449dc70","Type":"ContainerStarted","Data":"ea047508cb623d2e90c208409d5cd0ff3b3af32c8bb319c49b3ee7fa83da9fe0"} Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.383468 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.486514 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") pod \"dc7f7054-2ff2-4045-aa35-4345b449dc70\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.486629 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") pod \"dc7f7054-2ff2-4045-aa35-4345b449dc70\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.486809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") pod \"dc7f7054-2ff2-4045-aa35-4345b449dc70\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.494395 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9" (OuterVolumeSpecName: "kube-api-access-k75l9") pod "dc7f7054-2ff2-4045-aa35-4345b449dc70" (UID: "dc7f7054-2ff2-4045-aa35-4345b449dc70"). InnerVolumeSpecName "kube-api-access-k75l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.494433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc7f7054-2ff2-4045-aa35-4345b449dc70" (UID: "dc7f7054-2ff2-4045-aa35-4345b449dc70"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.495643 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc7f7054-2ff2-4045-aa35-4345b449dc70" (UID: "dc7f7054-2ff2-4045-aa35-4345b449dc70"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.589232 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.589271 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.589280 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.946425 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" event={"ID":"dc7f7054-2ff2-4045-aa35-4345b449dc70","Type":"ContainerDied","Data":"ea047508cb623d2e90c208409d5cd0ff3b3af32c8bb319c49b3ee7fa83da9fe0"} Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.947187 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea047508cb623d2e90c208409d5cd0ff3b3af32c8bb319c49b3ee7fa83da9fe0" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.946815 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:04 crc kubenswrapper[4985]: I0128 19:15:04.464873 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"] Jan 28 19:15:04 crc kubenswrapper[4985]: I0128 19:15:04.475617 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"] Jan 28 19:15:05 crc kubenswrapper[4985]: I0128 19:15:05.282892 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfca2781-d8d0-4e7e-85c8-d337780059ae" path="/var/lib/kubelet/pods/dfca2781-d8d0-4e7e-85c8-d337780059ae/volumes" Jan 28 19:15:12 crc kubenswrapper[4985]: I0128 19:15:12.263990 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:12 crc kubenswrapper[4985]: E0128 19:15:12.264912 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:27 crc kubenswrapper[4985]: I0128 19:15:27.265229 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:27 crc kubenswrapper[4985]: E0128 19:15:27.266672 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:38 crc kubenswrapper[4985]: I0128 19:15:38.264813 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:38 crc kubenswrapper[4985]: E0128 19:15:38.265643 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:42 crc kubenswrapper[4985]: I0128 19:15:42.264093 4985 scope.go:117] "RemoveContainer" containerID="0f1e952a6fa49b7083594207d25422769b2776c1aec196aa97dc536dd6123d3e" Jan 28 19:15:53 crc kubenswrapper[4985]: I0128 19:15:53.264326 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:53 crc kubenswrapper[4985]: E0128 19:15:53.265719 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:06 crc kubenswrapper[4985]: I0128 19:16:06.264880 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:06 crc kubenswrapper[4985]: E0128 19:16:06.266327 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:21 crc kubenswrapper[4985]: I0128 19:16:21.273584 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:21 crc kubenswrapper[4985]: E0128 19:16:21.274583 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:34 crc kubenswrapper[4985]: I0128 19:16:34.264092 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:34 crc kubenswrapper[4985]: E0128 19:16:34.264972 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:48 crc kubenswrapper[4985]: I0128 19:16:48.263910 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:48 crc kubenswrapper[4985]: E0128 19:16:48.264832 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:59 crc kubenswrapper[4985]: I0128 19:16:59.264858 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:59 crc kubenswrapper[4985]: E0128 19:16:59.265786 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:17:14 crc kubenswrapper[4985]: I0128 19:17:14.264367 4985 
scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:17:14 crc kubenswrapper[4985]: E0128 19:17:14.265165 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:17:28 crc kubenswrapper[4985]: I0128 19:17:28.264418 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:17:28 crc kubenswrapper[4985]: E0128 19:17:28.265194 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:17:43 crc kubenswrapper[4985]: I0128 19:17:43.264308 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:17:43 crc kubenswrapper[4985]: E0128 19:17:43.265124 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:17:54 crc kubenswrapper[4985]: I0128 19:17:54.264307 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:17:54 crc kubenswrapper[4985]: E0128 19:17:54.265053 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:18:08 crc kubenswrapper[4985]: I0128 19:18:08.264669 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:18:08 crc kubenswrapper[4985]: E0128 19:18:08.265495 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:18:21 crc kubenswrapper[4985]: I0128 19:18:21.272425 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:18:21 crc kubenswrapper[4985]: E0128 19:18:21.273245 4985 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.264031 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:18:35 crc kubenswrapper[4985]: E0128 19:18:35.264803 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.542629 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:18:35 crc kubenswrapper[4985]: E0128 19:18:35.543697 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7f7054-2ff2-4045-aa35-4345b449dc70" containerName="collect-profiles" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.543720 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7f7054-2ff2-4045-aa35-4345b449dc70" containerName="collect-profiles" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.546307 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7f7054-2ff2-4045-aa35-4345b449dc70" containerName="collect-profiles" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.583899 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.597963 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.740503 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.740589 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.740813 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.843484 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.843555 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.843748 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.844421 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.844601 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.869099 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.916084 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:36 crc kubenswrapper[4985]: I0128 19:18:36.447483 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:18:37 crc kubenswrapper[4985]: I0128 19:18:37.455757 4985 generic.go:334] "Generic (PLEG): container finished" podID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerID="bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f" exitCode=0 Jan 28 19:18:37 crc kubenswrapper[4985]: I0128 19:18:37.456132 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerDied","Data":"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f"} Jan 28 19:18:37 crc kubenswrapper[4985]: I0128 19:18:37.456165 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerStarted","Data":"355bd54575836eb89434d5f80445367bca9f1cbab148609bff229841432e69de"} Jan 28 19:18:37 crc kubenswrapper[4985]: I0128 19:18:37.459132 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:18:38 crc kubenswrapper[4985]: I0128 19:18:38.468179 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerStarted","Data":"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4"} Jan 28 19:18:45 crc kubenswrapper[4985]: E0128 19:18:45.772884 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29d3c5bf_f955_4498_a72d_b71b0bb65d6e.slice/crio-conmon-bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4.scope\": RecentStats: unable to find data in memory cache]" Jan 28 19:18:46 crc kubenswrapper[4985]: I0128 19:18:46.576396 4985 generic.go:334] "Generic (PLEG): container finished" podID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerID="bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4" exitCode=0 Jan 28 19:18:46 crc kubenswrapper[4985]: I0128 19:18:46.576762 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerDied","Data":"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4"} Jan 28 19:18:47 crc kubenswrapper[4985]: I0128 19:18:47.265157 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:18:48 crc kubenswrapper[4985]: I0128 19:18:48.599337 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerStarted","Data":"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff"} Jan 28 19:18:48 crc kubenswrapper[4985]: I0128 
19:18:48.608702 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59"} Jan 28 19:18:48 crc kubenswrapper[4985]: I0128 19:18:48.629988 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9f59" podStartSLOduration=4.068388264 podStartE2EDuration="13.629968909s" podCreationTimestamp="2026-01-28 19:18:35 +0000 UTC" firstStartedPulling="2026-01-28 19:18:37.458844556 +0000 UTC m=+3928.285407387" lastFinishedPulling="2026-01-28 19:18:47.020425211 +0000 UTC m=+3937.846988032" observedRunningTime="2026-01-28 19:18:48.622305482 +0000 UTC m=+3939.448868313" watchObservedRunningTime="2026-01-28 19:18:48.629968909 +0000 UTC m=+3939.456531720" Jan 28 19:18:55 crc kubenswrapper[4985]: I0128 19:18:55.916551 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:55 crc kubenswrapper[4985]: I0128 19:18:55.917067 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:56 crc kubenswrapper[4985]: I0128 19:18:56.974085 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9f59" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" probeResult="failure" output=< Jan 28 19:18:56 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:18:56 crc kubenswrapper[4985]: > Jan 28 19:19:07 crc kubenswrapper[4985]: I0128 19:19:07.263269 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9f59" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" probeResult="failure" output=< Jan 28 19:19:07 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:19:07 crc kubenswrapper[4985]: > Jan 28 19:19:15 crc kubenswrapper[4985]: I0128 19:19:15.965988 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:19:16 crc kubenswrapper[4985]: I0128 19:19:16.014751 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:19:16 crc kubenswrapper[4985]: I0128 19:19:16.207866 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:19:17 crc kubenswrapper[4985]: I0128 19:19:17.930274 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z9f59" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" containerID="cri-o://956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" gracePeriod=2 Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.585532 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.779497 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") pod \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.789577 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") pod \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.789638 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") pod \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.790562 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities" (OuterVolumeSpecName: "utilities") pod "29d3c5bf-f955-4498-a72d-b71b0bb65d6e" (UID: "29d3c5bf-f955-4498-a72d-b71b0bb65d6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.797700 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f" (OuterVolumeSpecName: "kube-api-access-xlr2f") pod "29d3c5bf-f955-4498-a72d-b71b0bb65d6e" (UID: "29d3c5bf-f955-4498-a72d-b71b0bb65d6e"). InnerVolumeSpecName "kube-api-access-xlr2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.893006 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.893048 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.899370 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29d3c5bf-f955-4498-a72d-b71b0bb65d6e" (UID: "29d3c5bf-f955-4498-a72d-b71b0bb65d6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.948702 4985 generic.go:334] "Generic (PLEG): container finished" podID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerID="956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" exitCode=0 Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.948774 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerDied","Data":"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff"} Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.948807 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerDied","Data":"355bd54575836eb89434d5f80445367bca9f1cbab148609bff229841432e69de"} Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.948826 4985 scope.go:117] "RemoveContainer" containerID="956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.949028 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.986508 4985 scope.go:117] "RemoveContainer" containerID="bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.996350 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.000187 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.012328 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.025559 4985 scope.go:117] "RemoveContainer" containerID="bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.119493 4985 scope.go:117] "RemoveContainer" containerID="956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" Jan 28 19:19:19 crc kubenswrapper[4985]: E0128 19:19:19.120354 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff\": container with ID starting with 956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff not found: ID does not exist" containerID="956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.120407 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff"} err="failed to get container status \"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff\": rpc error: code = NotFound desc = could not find container \"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff\": container with ID starting with 956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff not found: ID does not exist" Jan 28 19:19:19 crc 
kubenswrapper[4985]: I0128 19:19:19.120440 4985 scope.go:117] "RemoveContainer" containerID="bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4" Jan 28 19:19:19 crc kubenswrapper[4985]: E0128 19:19:19.121296 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4\": container with ID starting with bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4 not found: ID does not exist" containerID="bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.121337 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4"} err="failed to get container status \"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4\": rpc error: code = NotFound desc = could not find container \"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4\": container with ID starting with bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4 not found: ID does not exist" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.121357 4985 scope.go:117] "RemoveContainer" containerID="bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f" Jan 28 19:19:19 crc kubenswrapper[4985]: E0128 19:19:19.122773 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f\": container with ID starting with bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f not found: ID does not exist" containerID="bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.122820 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f"} err="failed to get container status \"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f\": rpc error: code = NotFound desc = could not find container \"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f\": container with ID starting with bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f not found: ID does not exist" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.281046 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" path="/var/lib/kubelet/pods/29d3c5bf-f955-4498-a72d-b71b0bb65d6e/volumes" Jan 28 19:19:56 crc kubenswrapper[4985]: E0128 19:19:56.442477 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:55024->38.102.83.195:43365: write tcp 38.102.83.195:55024->38.102.83.195:43365: write: broken pipe Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.300794 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:04 crc kubenswrapper[4985]: E0128 19:21:04.301836 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="extract-utilities" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.301850 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="extract-utilities" Jan 28 19:21:04 crc 
kubenswrapper[4985]: E0128 19:21:04.301865 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="extract-content" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.301871 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="extract-content" Jan 28 19:21:04 crc kubenswrapper[4985]: E0128 19:21:04.301901 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.301909 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.302165 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.303999 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.306200 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.306238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.306289 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlll\" (UniqueName: \"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.321728 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.408896 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.409018 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.409057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jlll\" (UniqueName: 
\"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.409659 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.409680 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.437968 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jlll\" (UniqueName: \"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.675701 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:05 crc kubenswrapper[4985]: I0128 19:21:05.339868 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:06 crc kubenswrapper[4985]: I0128 19:21:06.158340 4985 generic.go:334] "Generic (PLEG): container finished" podID="af1fd134-bd28-4422-88b4-27f389229481" containerID="ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e" exitCode=0 Jan 28 19:21:06 crc kubenswrapper[4985]: I0128 19:21:06.158545 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerDied","Data":"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e"} Jan 28 19:21:06 crc kubenswrapper[4985]: I0128 19:21:06.158965 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerStarted","Data":"bc817422166edcc0a6ae8557035a413653c4ac3ad6d4d9093ca8973bcee53f57"} Jan 28 19:21:08 crc kubenswrapper[4985]: I0128 19:21:08.193834 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerStarted","Data":"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9"} Jan 28 19:21:09 crc kubenswrapper[4985]: I0128 19:21:09.205736 4985 generic.go:334] "Generic (PLEG): container finished" podID="af1fd134-bd28-4422-88b4-27f389229481" containerID="9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9" exitCode=0 Jan 28 19:21:09 crc kubenswrapper[4985]: I0128 19:21:09.205823 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerDied","Data":"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9"} Jan 28 
19:21:10 crc kubenswrapper[4985]: I0128 19:21:10.219878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerStarted","Data":"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99"} Jan 28 19:21:10 crc kubenswrapper[4985]: I0128 19:21:10.249629 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7j52l" podStartSLOduration=2.7222573089999997 podStartE2EDuration="6.24960471s" podCreationTimestamp="2026-01-28 19:21:04 +0000 UTC" firstStartedPulling="2026-01-28 19:21:06.163235452 +0000 UTC m=+4076.989798273" lastFinishedPulling="2026-01-28 19:21:09.690582853 +0000 UTC m=+4080.517145674" observedRunningTime="2026-01-28 19:21:10.243005273 +0000 UTC m=+4081.069568094" watchObservedRunningTime="2026-01-28 19:21:10.24960471 +0000 UTC m=+4081.076167541" Jan 28 19:21:11 crc kubenswrapper[4985]: I0128 19:21:11.186409 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:21:11 crc kubenswrapper[4985]: I0128 19:21:11.186483 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:21:14 crc kubenswrapper[4985]: I0128 19:21:14.676384 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:14 crc kubenswrapper[4985]: I0128 19:21:14.676735 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:14 crc kubenswrapper[4985]: I0128 19:21:14.736529 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:15 crc kubenswrapper[4985]: I0128 19:21:15.348971 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:15 crc kubenswrapper[4985]: I0128 19:21:15.409034 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.300819 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7j52l" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="registry-server" containerID="cri-o://729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" gracePeriod=2 Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.927369 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.976425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jlll\" (UniqueName: \"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") pod \"af1fd134-bd28-4422-88b4-27f389229481\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.976555 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") pod \"af1fd134-bd28-4422-88b4-27f389229481\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.976802 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") pod \"af1fd134-bd28-4422-88b4-27f389229481\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.978305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities" (OuterVolumeSpecName: "utilities") pod "af1fd134-bd28-4422-88b4-27f389229481" (UID: "af1fd134-bd28-4422-88b4-27f389229481"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.986390 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.990983 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll" (OuterVolumeSpecName: "kube-api-access-7jlll") pod "af1fd134-bd28-4422-88b4-27f389229481" (UID: "af1fd134-bd28-4422-88b4-27f389229481"). InnerVolumeSpecName "kube-api-access-7jlll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.005387 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af1fd134-bd28-4422-88b4-27f389229481" (UID: "af1fd134-bd28-4422-88b4-27f389229481"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.088009 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jlll\" (UniqueName: \"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") on node \"crc\" DevicePath \"\"" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.088050 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316651 4985 generic.go:334] "Generic (PLEG): container finished" podID="af1fd134-bd28-4422-88b4-27f389229481" containerID="729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" exitCode=0 Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316715 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerDied","Data":"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99"} Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316729 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316763 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerDied","Data":"bc817422166edcc0a6ae8557035a413653c4ac3ad6d4d9093ca8973bcee53f57"} Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316790 4985 scope.go:117] "RemoveContainer" containerID="729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.346103 4985 scope.go:117] "RemoveContainer" containerID="9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.369331 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.380024 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.382656 4985 scope.go:117] "RemoveContainer" containerID="ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.425381 4985 scope.go:117] "RemoveContainer" containerID="729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" Jan 28 19:21:18 crc kubenswrapper[4985]: E0128 19:21:18.425863 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99\": container with ID starting with 729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99 not found: ID does not exist" containerID="729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.425921 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99"} err="failed to get container status 
\"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99\": rpc error: code = NotFound desc = could not find container \"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99\": container with ID starting with 729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99 not found: ID does not exist" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.425957 4985 scope.go:117] "RemoveContainer" containerID="9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9" Jan 28 19:21:18 crc kubenswrapper[4985]: E0128 19:21:18.426368 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9\": container with ID starting with 9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9 not found: ID does not exist" containerID="9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.426407 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9"} err="failed to get container status \"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9\": rpc error: code = NotFound desc = could not find container \"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9\": container with ID starting with 9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9 not found: ID does not exist" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.426434 4985 scope.go:117] "RemoveContainer" containerID="ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e" Jan 28 19:21:18 crc kubenswrapper[4985]: E0128 19:21:18.426824 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e\": container with ID starting with ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e not found: ID does not exist" containerID="ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.426844 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e"} err="failed to get container status \"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e\": rpc error: code = NotFound desc = could not find container \"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e\": container with ID starting with ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e not found: ID does not exist" Jan 28 19:21:19 crc kubenswrapper[4985]: I0128 19:21:19.290856 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af1fd134-bd28-4422-88b4-27f389229481" path="/var/lib/kubelet/pods/af1fd134-bd28-4422-88b4-27f389229481/volumes" Jan 28 19:21:41 crc kubenswrapper[4985]: I0128 19:21:41.186531 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:21:41 crc kubenswrapper[4985]: I0128 19:21:41.187143 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.185848 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.186420 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.186475 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.187424 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.187506 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59" gracePeriod=600 Jan 28 19:22:12 crc kubenswrapper[4985]: I0128 19:22:12.190897 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59" exitCode=0 Jan 28 19:22:12 crc kubenswrapper[4985]: I0128 19:22:12.190939 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59"} Jan 28 19:22:12 crc kubenswrapper[4985]: I0128 19:22:12.191473 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"} Jan 28 19:22:12 crc kubenswrapper[4985]: I0128 19:22:12.191495 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:24:11 crc kubenswrapper[4985]: I0128 19:24:11.186112 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:24:11 crc 
Jan 28 19:24:11 crc kubenswrapper[4985]: I0128 19:24:11.186666 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:24:41 crc kubenswrapper[4985]: I0128 19:24:41.186110 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:24:41 crc kubenswrapper[4985]: I0128 19:24:41.186707 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.186323 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.187123 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.187190 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.191377 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.191540 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" gracePeriod=600
Jan 28 19:25:11 crc kubenswrapper[4985]: E0128 19:25:11.330953 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:25:12 crc kubenswrapper[4985]: I0128 19:25:12.232169 4985 generic.go:334] "Generic (PLEG): container finished"
podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" exitCode=0 Jan 28 19:25:12 crc kubenswrapper[4985]: I0128 19:25:12.232333 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"} Jan 28 19:25:12 crc kubenswrapper[4985]: I0128 19:25:12.232532 4985 scope.go:117] "RemoveContainer" containerID="4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59" Jan 28 19:25:12 crc kubenswrapper[4985]: I0128 19:25:12.233457 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:25:12 crc kubenswrapper[4985]: E0128 19:25:12.233993 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:25:27 crc kubenswrapper[4985]: I0128 19:25:27.264781 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:25:27 crc kubenswrapper[4985]: E0128 19:25:27.265951 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:25:41 crc kubenswrapper[4985]: I0128 19:25:41.271971 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:25:41 crc kubenswrapper[4985]: E0128 19:25:41.273922 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:25:55 crc kubenswrapper[4985]: I0128 19:25:55.264387 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:25:55 crc kubenswrapper[4985]: E0128 19:25:55.265136 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:26:10 crc kubenswrapper[4985]: I0128 19:26:10.264464 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:26:10 crc 
Jan 28 19:26:10 crc kubenswrapper[4985]: E0128 19:26:10.265870 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:26:24 crc kubenswrapper[4985]: I0128 19:26:24.264629 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"
Jan 28 19:26:24 crc kubenswrapper[4985]: E0128 19:26:24.265369 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:26:37 crc kubenswrapper[4985]: I0128 19:26:37.264621 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"
Jan 28 19:26:37 crc kubenswrapper[4985]: E0128 19:26:37.265655 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:26:49 crc kubenswrapper[4985]: I0128 19:26:49.263952 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"
Jan 28 19:26:49 crc kubenswrapper[4985]: E0128 19:26:49.264911 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:27:00 crc kubenswrapper[4985]: I0128 19:27:00.264341 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"
Jan 28 19:27:00 crc kubenswrapper[4985]: E0128 19:27:00.265296 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:27:11 crc kubenswrapper[4985]: I0128 19:27:11.271366 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"
Jan 28 19:27:11 crc kubenswrapper[4985]: E0128 19:27:11.272317 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff:
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:27:24 crc kubenswrapper[4985]: I0128 19:27:24.264510 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:27:24 crc kubenswrapper[4985]: E0128 19:27:24.265199 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:27:39 crc kubenswrapper[4985]: I0128 19:27:39.263941 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:27:39 crc kubenswrapper[4985]: E0128 19:27:39.264690 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:27:52 crc kubenswrapper[4985]: I0128 19:27:52.266009 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:27:52 crc kubenswrapper[4985]: E0128 19:27:52.266903 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.279968 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"] Jan 28 19:28:05 crc kubenswrapper[4985]: E0128 19:28:05.281026 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="registry-server" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.281046 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="registry-server" Jan 28 19:28:05 crc kubenswrapper[4985]: E0128 19:28:05.281082 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="extract-utilities" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.281090 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="extract-utilities" Jan 28 19:28:05 crc kubenswrapper[4985]: E0128 19:28:05.281111 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="extract-content" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.281119 4985 
state_mem.go:107] "Deleted CPUSet assignment" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="extract-content" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.281403 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="registry-server" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.283500 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"] Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.283628 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.364580 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.368524 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.368657 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.471455 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.471521 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.471578 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.472094 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: 
I0128 19:28:05.472191 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.931039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:06 crc kubenswrapper[4985]: I0128 19:28:06.220299 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:06 crc kubenswrapper[4985]: I0128 19:28:06.265533 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"
Jan 28 19:28:06 crc kubenswrapper[4985]: E0128 19:28:06.265768 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:28:06 crc kubenswrapper[4985]: I0128 19:28:06.747300 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"]
Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.049430 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dwkk7"]
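
Several pods now interleave (certified-operators-g5d6k above, community-operators-dwkk7 just ADDed, plus the crash-looping daemon), so it can help to project a capture like this down to its PLEG lifecycle events. A throwaway filter matching the exact pod=.../event={...} shape of the SyncLoop (PLEG) entries here; the file name and column widths are arbitrary:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Projects a kubelet log down to its PLEG lifecycle events. The pattern
// mirrors the 'SyncLoop (PLEG): event for pod' entries in this capture.
// Usage (file name is arbitrary):
//
//	zcat kubelet.log.gz | go run plegfilter.go
var plegRE = regexp.MustCompile(
	`SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // flattened lines can be very long
	for sc.Scan() {
		// A flattened line may hold several entries, so collect every match on it.
		for _, m := range plegRE.FindAllStringSubmatch(sc.Text(), -1) {
			pod, kind, id := m[1], m[3], m[4]
			fmt.Printf("%-55s %-16s %.12s\n", pod, kind, id)
		}
	}
}
```

Fed this section, it prints rows like "openshift-marketplace/certified-operators-g5d6k ContainerDied 69d4e05fa861", one per ContainerStarted/ContainerDied event.
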
Need to start a new one" pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.074718 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.113420 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.113524 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.113640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.215549 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.215633 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.215714 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.216162 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.216186 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.237042 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.364415 4985 generic.go:334] "Generic (PLEG): container finished" podID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerID="69d4e05fa8611628adda8b6890905569708e909b85dd0cae338b974b7963ab20" exitCode=0 Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.364478 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerDied","Data":"69d4e05fa8611628adda8b6890905569708e909b85dd0cae338b974b7963ab20"} Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.364515 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerStarted","Data":"f995d9e0fe7cc52e4e2477b23584afbe7acdcdaaff398007005dc0deaba49a75"} Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.367172 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.430029 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:08 crc kubenswrapper[4985]: I0128 19:28:08.381770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerStarted","Data":"4152802d09478a45d44a174e418e640afbf94234635886a9d8d380306df85929"} Jan 28 19:28:08 crc kubenswrapper[4985]: I0128 19:28:08.524109 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:09 crc kubenswrapper[4985]: I0128 19:28:09.396403 4985 generic.go:334] "Generic (PLEG): container finished" podID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerID="f7e71cc3aa266e86642df0368ccd0be0c9024e06e8dd76ed47af29f9b0389fba" exitCode=0 Jan 28 19:28:09 crc kubenswrapper[4985]: I0128 19:28:09.396516 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerDied","Data":"f7e71cc3aa266e86642df0368ccd0be0c9024e06e8dd76ed47af29f9b0389fba"} Jan 28 19:28:09 crc kubenswrapper[4985]: I0128 19:28:09.396610 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerStarted","Data":"5d8d8e16e03ffc2f078f992a22dea1222e612d0595de642ee60d2ae1e024af47"} Jan 28 19:28:10 crc kubenswrapper[4985]: I0128 19:28:10.411235 4985 generic.go:334] "Generic (PLEG): container finished" podID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerID="4152802d09478a45d44a174e418e640afbf94234635886a9d8d380306df85929" exitCode=0 Jan 28 19:28:10 crc kubenswrapper[4985]: I0128 19:28:10.411290 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerDied","Data":"4152802d09478a45d44a174e418e640afbf94234635886a9d8d380306df85929"} Jan 28 19:28:11 
crc kubenswrapper[4985]: I0128 19:28:11.423552 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerStarted","Data":"f233cfdbfd8ae96be208118bf4d667f20725f55748c7d7e2f273e8c3f12f44d4"}
Jan 28 19:28:11 crc kubenswrapper[4985]: I0128 19:28:11.431777 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerStarted","Data":"5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73"}
Jan 28 19:28:11 crc kubenswrapper[4985]: I0128 19:28:11.465736 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g5d6k" podStartSLOduration=2.82545814 podStartE2EDuration="6.465717258s" podCreationTimestamp="2026-01-28 19:28:05 +0000 UTC" firstStartedPulling="2026-01-28 19:28:07.366815125 +0000 UTC m=+4498.193377956" lastFinishedPulling="2026-01-28 19:28:11.007074243 +0000 UTC m=+4501.833637074" observedRunningTime="2026-01-28 19:28:11.462144567 +0000 UTC m=+4502.288707478" watchObservedRunningTime="2026-01-28 19:28:11.465717258 +0000 UTC m=+4502.292280089"
Jan 28 19:28:13 crc kubenswrapper[4985]: I0128 19:28:13.456810 4985 generic.go:334] "Generic (PLEG): container finished" podID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerID="f233cfdbfd8ae96be208118bf4d667f20725f55748c7d7e2f273e8c3f12f44d4" exitCode=0
Jan 28 19:28:13 crc kubenswrapper[4985]: I0128 19:28:13.456892 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerDied","Data":"f233cfdbfd8ae96be208118bf4d667f20725f55748c7d7e2f273e8c3f12f44d4"}
Jan 28 19:28:14 crc kubenswrapper[4985]: I0128 19:28:14.476123 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerStarted","Data":"23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc"}
Jan 28 19:28:14 crc kubenswrapper[4985]: I0128 19:28:14.521458 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dwkk7" podStartSLOduration=3.04157915 podStartE2EDuration="7.521431288s" podCreationTimestamp="2026-01-28 19:28:07 +0000 UTC" firstStartedPulling="2026-01-28 19:28:09.401754546 +0000 UTC m=+4500.228317367" lastFinishedPulling="2026-01-28 19:28:13.881606654 +0000 UTC m=+4504.708169505" observedRunningTime="2026-01-28 19:28:14.500228688 +0000 UTC m=+4505.326791549" watchObservedRunningTime="2026-01-28 19:28:14.521431288 +0000 UTC m=+4505.347994109"
Jan 28 19:28:16 crc kubenswrapper[4985]: I0128 19:28:16.220653 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:16 crc kubenswrapper[4985]: I0128 19:28:16.221043 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g5d6k"
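
The two pod_startup_latency_tracker entries above decompose cleanly: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. the SLO figure excludes time spent pulling images. For certified-operators-g5d6k that is 6.465717258s - 3.640259118s = 2.82545814s, and for community-operators-dwkk7 it is 7.521431288s - 4.479852108s = 3.04157915s, matching the logged values exactly. A small check in Go with the timestamps pasted from those entries:

```go
package main

import (
	"fmt"
	"time"
)

// Recomputes podStartSLOduration from the fields of the two
// "Observed pod startup duration" entries above: the SLO figure is the
// end-to-end startup time with the image-pull window subtracted.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	pods := []struct {
		name, first, last string
		e2e               time.Duration
	}{
		{"certified-operators-g5d6k",
			"2026-01-28 19:28:07.366815125 +0000 UTC",
			"2026-01-28 19:28:11.007074243 +0000 UTC",
			6465717258 * time.Nanosecond},
		{"community-operators-dwkk7",
			"2026-01-28 19:28:09.401754546 +0000 UTC",
			"2026-01-28 19:28:13.881606654 +0000 UTC",
			7521431288 * time.Nanosecond},
	}
	for _, p := range pods {
		pull := parse(p.last).Sub(parse(p.first)) // image-pull window
		fmt.Printf("%s: SLO = %v - %v = %v\n", p.name, p.e2e, pull, p.e2e-pull)
	}
}
```
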
Jan 28 19:28:17 crc kubenswrapper[4985]: I0128 19:28:17.272788 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g5d6k" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" probeResult="failure" output=<
Jan 28 19:28:17 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 19:28:17 crc kubenswrapper[4985]: >
Jan 28 19:28:17 crc kubenswrapper[4985]: I0128 19:28:17.431180 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dwkk7"
Jan 28 19:28:17 crc kubenswrapper[4985]: I0128 19:28:17.431244 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dwkk7"
Jan 28 19:28:17 crc kubenswrapper[4985]: I0128 19:28:17.583561 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dwkk7"
Jan 28 19:28:20 crc kubenswrapper[4985]: I0128 19:28:20.265061 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"
Jan 28 19:28:20 crc kubenswrapper[4985]: E0128 19:28:20.266128 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:28:26 crc kubenswrapper[4985]: I0128 19:28:26.272886 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:26 crc kubenswrapper[4985]: I0128 19:28:26.323791 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:26 crc kubenswrapper[4985]: I0128 19:28:26.516776 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"]
Jan 28 19:28:27 crc kubenswrapper[4985]: I0128 19:28:27.488568 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dwkk7"
Jan 28 19:28:27 crc kubenswrapper[4985]: I0128 19:28:27.630070 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g5d6k" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" containerID="cri-o://5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73" gracePeriod=2
Jan 28 19:28:28 crc kubenswrapper[4985]: E0128 19:28:28.320043 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bd660cc_bac3_40a2_baf1_d27477b66355.slice/crio-5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 19:28:28 crc kubenswrapper[4985]: I0128 19:28:28.651319 4985 generic.go:334] "Generic (PLEG): container finished" podID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerID="5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73" exitCode=0
Jan 28 19:28:28 crc kubenswrapper[4985]: I0128 19:28:28.651428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerDied","Data":"5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73"}
Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.085213 4985 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.200043 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") pod \"7bd660cc-bac3-40a2-baf1-d27477b66355\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.200108 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") pod \"7bd660cc-bac3-40a2-baf1-d27477b66355\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.200310 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") pod \"7bd660cc-bac3-40a2-baf1-d27477b66355\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.201234 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities" (OuterVolumeSpecName: "utilities") pod "7bd660cc-bac3-40a2-baf1-d27477b66355" (UID: "7bd660cc-bac3-40a2-baf1-d27477b66355"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.201770 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.207832 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x" (OuterVolumeSpecName: "kube-api-access-65w6x") pod "7bd660cc-bac3-40a2-baf1-d27477b66355" (UID: "7bd660cc-bac3-40a2-baf1-d27477b66355"). InnerVolumeSpecName "kube-api-access-65w6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.256603 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bd660cc-bac3-40a2-baf1-d27477b66355" (UID: "7bd660cc-bac3-40a2-baf1-d27477b66355"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.305242 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.305287 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.668445 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerDied","Data":"f995d9e0fe7cc52e4e2477b23584afbe7acdcdaaff398007005dc0deaba49a75"} Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.668501 4985 scope.go:117] "RemoveContainer" containerID="5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.668659 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.707940 4985 scope.go:117] "RemoveContainer" containerID="4152802d09478a45d44a174e418e640afbf94234635886a9d8d380306df85929" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.708656 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"] Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.730811 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"] Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.148909 4985 scope.go:117] "RemoveContainer" containerID="69d4e05fa8611628adda8b6890905569708e909b85dd0cae338b974b7963ab20" Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.316556 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.316855 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dwkk7" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="registry-server" containerID="cri-o://23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc" gracePeriod=2 Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.697379 4985 generic.go:334] "Generic (PLEG): container finished" podID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerID="23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc" exitCode=0 Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.697743 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerDied","Data":"23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc"} Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.917178 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.048304 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") pod \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.048458 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") pod \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.048571 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") pod \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.049396 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities" (OuterVolumeSpecName: "utilities") pod "15cde5ed-b5df-4ebd-9dc3-417d405ad81e" (UID: "15cde5ed-b5df-4ebd-9dc3-417d405ad81e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.055581 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks" (OuterVolumeSpecName: "kube-api-access-vx6ks") pod "15cde5ed-b5df-4ebd-9dc3-417d405ad81e" (UID: "15cde5ed-b5df-4ebd-9dc3-417d405ad81e"). InnerVolumeSpecName "kube-api-access-vx6ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.100431 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15cde5ed-b5df-4ebd-9dc3-417d405ad81e" (UID: "15cde5ed-b5df-4ebd-9dc3-417d405ad81e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.151037 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.151294 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.151371 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.282617 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" path="/var/lib/kubelet/pods/7bd660cc-bac3-40a2-baf1-d27477b66355/volumes" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.713360 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerDied","Data":"5d8d8e16e03ffc2f078f992a22dea1222e612d0595de642ee60d2ae1e024af47"} Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.713416 4985 scope.go:117] "RemoveContainer" containerID="23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.713472 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.740290 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.741342 4985 scope.go:117] "RemoveContainer" containerID="f233cfdbfd8ae96be208118bf4d667f20725f55748c7d7e2f273e8c3f12f44d4" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.755375 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.769803 4985 scope.go:117] "RemoveContainer" containerID="f7e71cc3aa266e86642df0368ccd0be0c9024e06e8dd76ed47af29f9b0389fba" Jan 28 19:28:32 crc kubenswrapper[4985]: I0128 19:28:32.265311 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:28:32 crc kubenswrapper[4985]: E0128 19:28:32.265727 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:28:33 crc kubenswrapper[4985]: I0128 19:28:33.279000 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" path="/var/lib/kubelet/pods/15cde5ed-b5df-4ebd-9dc3-417d405ad81e/volumes" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.146944 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148359 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148381 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148411 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148422 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148438 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="extract-utilities" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148449 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="extract-utilities" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148470 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="extract-content" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148480 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="extract-content" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148511 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="extract-utilities" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148522 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="extract-utilities" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148578 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="extract-content" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148591 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="extract-content" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148968 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.149015 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.152068 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.161294 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.265129 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.265642 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.265689 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.368120 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.368251 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.368332 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.369027 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.369019 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.416242 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.475370 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:44 crc kubenswrapper[4985]: I0128 19:28:44.083376 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:28:44 crc kubenswrapper[4985]: I0128 19:28:44.881406 4985 generic.go:334] "Generic (PLEG): container finished" podID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerID="9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e" exitCode=0 Jan 28 19:28:44 crc kubenswrapper[4985]: I0128 19:28:44.881678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerDied","Data":"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e"} Jan 28 19:28:44 crc kubenswrapper[4985]: I0128 19:28:44.881712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerStarted","Data":"efb6ed0d1fc336a0b4e1274c9acba02fb5e05bd0a081461a30985004bf135538"} Jan 28 19:28:45 crc kubenswrapper[4985]: I0128 19:28:45.897895 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerStarted","Data":"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157"} Jan 28 19:28:47 crc kubenswrapper[4985]: I0128 19:28:47.268254 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:28:47 crc kubenswrapper[4985]: E0128 19:28:47.270113 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:28:51 crc kubenswrapper[4985]: I0128 19:28:51.964900 4985 generic.go:334] "Generic (PLEG): container finished" podID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerID="2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157" exitCode=0 Jan 28 19:28:51 crc kubenswrapper[4985]: I0128 19:28:51.964958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerDied","Data":"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157"} Jan 28 19:28:52 crc kubenswrapper[4985]: I0128 19:28:52.997007 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerStarted","Data":"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d"} Jan 28 19:28:53 crc kubenswrapper[4985]: I0128 19:28:53.029169 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-h9jhp" podStartSLOduration=2.519334965 podStartE2EDuration="10.029143743s" podCreationTimestamp="2026-01-28 19:28:43 +0000 UTC" firstStartedPulling="2026-01-28 19:28:44.885239993 +0000 UTC m=+4535.711802814" lastFinishedPulling="2026-01-28 19:28:52.395048771 +0000 UTC m=+4543.221611592" observedRunningTime="2026-01-28 19:28:53.018585984 +0000 UTC m=+4543.845148815" watchObservedRunningTime="2026-01-28 19:28:53.029143743 +0000 UTC m=+4543.855706564" Jan 28 19:28:53 crc kubenswrapper[4985]: I0128 19:28:53.476792 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:53 crc kubenswrapper[4985]: I0128 19:28:53.476842 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:54 crc kubenswrapper[4985]: I0128 19:28:54.711003 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h9jhp" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" probeResult="failure" output=< Jan 28 19:28:54 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:28:54 crc kubenswrapper[4985]: > Jan 28 19:29:00 crc kubenswrapper[4985]: I0128 19:29:00.263900 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:00 crc kubenswrapper[4985]: E0128 19:29:00.264688 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:29:04 crc kubenswrapper[4985]: I0128 19:29:04.531918 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h9jhp" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" probeResult="failure" output=< Jan 28 19:29:04 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:29:04 crc kubenswrapper[4985]: > Jan 28 19:29:12 crc kubenswrapper[4985]: I0128 19:29:12.264457 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:12 crc kubenswrapper[4985]: E0128 19:29:12.265296 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:29:14 crc kubenswrapper[4985]: I0128 19:29:14.695010 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h9jhp" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" probeResult="failure" output=< Jan 28 19:29:14 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:29:14 crc kubenswrapper[4985]: > Jan 28 19:29:23 crc kubenswrapper[4985]: I0128 19:29:23.556525 4985 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:29:23 crc kubenswrapper[4985]: I0128 19:29:23.620628 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:29:24 crc kubenswrapper[4985]: I0128 19:29:24.358836 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:29:25 crc kubenswrapper[4985]: I0128 19:29:25.264625 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:25 crc kubenswrapper[4985]: E0128 19:29:25.265311 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:29:25 crc kubenswrapper[4985]: I0128 19:29:25.332506 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h9jhp" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" containerID="cri-o://c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" gracePeriod=2 Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.287789 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353710 4985 generic.go:334] "Generic (PLEG): container finished" podID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerID="c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" exitCode=0 Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353761 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerDied","Data":"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d"} Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353788 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerDied","Data":"efb6ed0d1fc336a0b4e1274c9acba02fb5e05bd0a081461a30985004bf135538"} Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353790 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353804 4985 scope.go:117] "RemoveContainer" containerID="c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.382510 4985 scope.go:117] "RemoveContainer" containerID="2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.414320 4985 scope.go:117] "RemoveContainer" containerID="9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.448813 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") pod \"79e005da-4531-450b-a74b-ff8d59a5d3cd\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.448952 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") pod \"79e005da-4531-450b-a74b-ff8d59a5d3cd\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.449032 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") pod \"79e005da-4531-450b-a74b-ff8d59a5d3cd\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.450101 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities" (OuterVolumeSpecName: "utilities") pod "79e005da-4531-450b-a74b-ff8d59a5d3cd" (UID: "79e005da-4531-450b-a74b-ff8d59a5d3cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.455619 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr" (OuterVolumeSpecName: "kube-api-access-k6hhr") pod "79e005da-4531-450b-a74b-ff8d59a5d3cd" (UID: "79e005da-4531-450b-a74b-ff8d59a5d3cd"). InnerVolumeSpecName "kube-api-access-k6hhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.532233 4985 scope.go:117] "RemoveContainer" containerID="c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" Jan 28 19:29:26 crc kubenswrapper[4985]: E0128 19:29:26.533298 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d\": container with ID starting with c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d not found: ID does not exist" containerID="c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.533463 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d"} err="failed to get container status \"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d\": rpc error: code = NotFound desc = could not find container \"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d\": container with ID starting with c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d not found: ID does not exist" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.533563 4985 scope.go:117] "RemoveContainer" containerID="2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157" Jan 28 19:29:26 crc kubenswrapper[4985]: E0128 19:29:26.534081 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157\": container with ID starting with 2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157 not found: ID does not exist" containerID="2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.534123 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157"} err="failed to get container status \"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157\": rpc error: code = NotFound desc = could not find container \"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157\": container with ID starting with 2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157 not found: ID does not exist" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.534154 4985 scope.go:117] "RemoveContainer" containerID="9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e" Jan 28 19:29:26 crc kubenswrapper[4985]: E0128 19:29:26.534524 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e\": container with ID starting with 9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e not found: ID does not exist" containerID="9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.534575 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e"} err="failed to get container status \"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e\": rpc error: code = NotFound desc = could not 
find container \"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e\": container with ID starting with 9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e not found: ID does not exist" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.551851 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.552101 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") on node \"crc\" DevicePath \"\"" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.565038 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79e005da-4531-450b-a74b-ff8d59a5d3cd" (UID: "79e005da-4531-450b-a74b-ff8d59a5d3cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.654729 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.696763 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.707444 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:29:27 crc kubenswrapper[4985]: I0128 19:29:27.279877 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" path="/var/lib/kubelet/pods/79e005da-4531-450b-a74b-ff8d59a5d3cd/volumes" Jan 28 19:29:39 crc kubenswrapper[4985]: I0128 19:29:39.264719 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:39 crc kubenswrapper[4985]: E0128 19:29:39.265616 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:29:50 crc kubenswrapper[4985]: I0128 19:29:50.264329 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:50 crc kubenswrapper[4985]: E0128 19:29:50.265550 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.171727 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 19:30:00 crc kubenswrapper[4985]: E0128 19:30:00.172840 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="extract-utilities" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.172858 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="extract-utilities" Jan 28 19:30:00 crc kubenswrapper[4985]: E0128 19:30:00.172881 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="extract-content" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.172888 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="extract-content" Jan 28 19:30:00 crc kubenswrapper[4985]: E0128 19:30:00.172909 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.172917 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.173167 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.174272 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.177167 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.178635 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.186554 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.282174 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.282537 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.282701 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.385448 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.385687 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.385808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.387883 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.933795 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.934098 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:01 crc kubenswrapper[4985]: I0128 19:30:01.097454 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:01 crc kubenswrapper[4985]: I0128 19:30:01.758388 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 19:30:02 crc kubenswrapper[4985]: I0128 19:30:02.265402 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:30:02 crc kubenswrapper[4985]: E0128 19:30:02.266642 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:30:02 crc kubenswrapper[4985]: I0128 19:30:02.734021 4985 generic.go:334] "Generic (PLEG): container finished" podID="2bbf5b95-eb34-48ce-970a-48eec581f83b" containerID="6c8e48c972aa2e298f7430451a2f30fabf8f72218697856b1aa3451401eef4e3" exitCode=0 Jan 28 19:30:02 crc kubenswrapper[4985]: I0128 19:30:02.734106 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" event={"ID":"2bbf5b95-eb34-48ce-970a-48eec581f83b","Type":"ContainerDied","Data":"6c8e48c972aa2e298f7430451a2f30fabf8f72218697856b1aa3451401eef4e3"} Jan 28 19:30:02 crc kubenswrapper[4985]: I0128 19:30:02.734135 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" event={"ID":"2bbf5b95-eb34-48ce-970a-48eec581f83b","Type":"ContainerStarted","Data":"fa49be7150b1c0c3de249e0f82a0d01e7d454a343f93122c77700aaa7b38c1fb"} Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.319639 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.391946 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") pod \"2bbf5b95-eb34-48ce-970a-48eec581f83b\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.392126 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") pod \"2bbf5b95-eb34-48ce-970a-48eec581f83b\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.392235 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") pod \"2bbf5b95-eb34-48ce-970a-48eec581f83b\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.394087 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bbf5b95-eb34-48ce-970a-48eec581f83b" (UID: "2bbf5b95-eb34-48ce-970a-48eec581f83b"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.397792 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bbf5b95-eb34-48ce-970a-48eec581f83b" (UID: "2bbf5b95-eb34-48ce-970a-48eec581f83b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.397910 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw" (OuterVolumeSpecName: "kube-api-access-89dlw") pod "2bbf5b95-eb34-48ce-970a-48eec581f83b" (UID: "2bbf5b95-eb34-48ce-970a-48eec581f83b"). InnerVolumeSpecName "kube-api-access-89dlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.495096 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") on node \"crc\" DevicePath \"\"" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.495135 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.495148 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.756474 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" event={"ID":"2bbf5b95-eb34-48ce-970a-48eec581f83b","Type":"ContainerDied","Data":"fa49be7150b1c0c3de249e0f82a0d01e7d454a343f93122c77700aaa7b38c1fb"} Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.756528 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.756538 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa49be7150b1c0c3de249e0f82a0d01e7d454a343f93122c77700aaa7b38c1fb" Jan 28 19:30:05 crc kubenswrapper[4985]: I0128 19:30:05.411564 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"] Jan 28 19:30:05 crc kubenswrapper[4985]: I0128 19:30:05.422188 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"] Jan 28 19:30:07 crc kubenswrapper[4985]: I0128 19:30:07.282162 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62198283-1005-48a7-91a7-44d4240224ef" path="/var/lib/kubelet/pods/62198283-1005-48a7-91a7-44d4240224ef/volumes" Jan 28 19:30:15 crc kubenswrapper[4985]: I0128 19:30:15.265380 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:30:15 crc kubenswrapper[4985]: I0128 19:30:15.886954 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675"} Jan 28 19:30:42 crc kubenswrapper[4985]: I0128 19:30:42.738378 4985 scope.go:117] "RemoveContainer" containerID="e7f4c4199443b277fce34519a5f0cc3daf60a217d86701b9fd4cb717d8480164" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.384837 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"] Jan 28 19:31:22 crc kubenswrapper[4985]: E0128 19:31:22.386089 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbf5b95-eb34-48ce-970a-48eec581f83b" containerName="collect-profiles" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.386106 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbf5b95-eb34-48ce-970a-48eec581f83b" containerName="collect-profiles" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.386502 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bbf5b95-eb34-48ce-970a-48eec581f83b" containerName="collect-profiles" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.388598 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.405644 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"] Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.528863 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.528932 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.528970 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.632431 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.632491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.632515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.633484 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.633589 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.659034 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.767960 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:23 crc kubenswrapper[4985]: I0128 19:31:23.293442 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"] Jan 28 19:31:23 crc kubenswrapper[4985]: W0128 19:31:23.299623 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cc63e1e_427e_4268_bd2a_0137da7b65a9.slice/crio-761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d WatchSource:0}: Error finding container 761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d: Status 404 returned error can't find the container with id 761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d Jan 28 19:31:23 crc kubenswrapper[4985]: I0128 19:31:23.725227 4985 generic.go:334] "Generic (PLEG): container finished" podID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerID="411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12" exitCode=0 Jan 28 19:31:23 crc kubenswrapper[4985]: I0128 19:31:23.725291 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerDied","Data":"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12"} Jan 28 19:31:23 crc kubenswrapper[4985]: I0128 19:31:23.725318 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerStarted","Data":"761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d"} Jan 28 19:31:24 crc kubenswrapper[4985]: I0128 19:31:24.742410 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerStarted","Data":"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880"} Jan 28 19:31:26 crc kubenswrapper[4985]: I0128 19:31:26.766822 4985 generic.go:334] "Generic (PLEG): container finished" podID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerID="a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880" exitCode=0 Jan 28 19:31:26 crc kubenswrapper[4985]: I0128 19:31:26.766954 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerDied","Data":"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880"} Jan 28 19:31:27 crc kubenswrapper[4985]: I0128 19:31:27.782413 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerStarted","Data":"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9"} Jan 28 19:31:27 crc kubenswrapper[4985]: I0128 19:31:27.800631 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4pgtm" podStartSLOduration=2.380400708 
podStartE2EDuration="5.800610837s" podCreationTimestamp="2026-01-28 19:31:22 +0000 UTC" firstStartedPulling="2026-01-28 19:31:23.729387027 +0000 UTC m=+4694.555949878" lastFinishedPulling="2026-01-28 19:31:27.149597146 +0000 UTC m=+4697.976160007" observedRunningTime="2026-01-28 19:31:27.798274211 +0000 UTC m=+4698.624837042" watchObservedRunningTime="2026-01-28 19:31:27.800610837 +0000 UTC m=+4698.627173648" Jan 28 19:31:32 crc kubenswrapper[4985]: I0128 19:31:32.768949 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:32 crc kubenswrapper[4985]: I0128 19:31:32.769547 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:32 crc kubenswrapper[4985]: I0128 19:31:32.834090 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:32 crc kubenswrapper[4985]: I0128 19:31:32.923721 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:33 crc kubenswrapper[4985]: I0128 19:31:33.087950 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"] Jan 28 19:31:34 crc kubenswrapper[4985]: I0128 19:31:34.869081 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4pgtm" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="registry-server" containerID="cri-o://b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9" gracePeriod=2 Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.522599 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.589778 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") pod \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.590160 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") pod \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.590241 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") pod \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.591278 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities" (OuterVolumeSpecName: "utilities") pod "3cc63e1e-427e-4268-bd2a-0137da7b65a9" (UID: "3cc63e1e-427e-4268-bd2a-0137da7b65a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.599336 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s" (OuterVolumeSpecName: "kube-api-access-xf42s") pod "3cc63e1e-427e-4268-bd2a-0137da7b65a9" (UID: "3cc63e1e-427e-4268-bd2a-0137da7b65a9"). InnerVolumeSpecName "kube-api-access-xf42s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.628202 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cc63e1e-427e-4268-bd2a-0137da7b65a9" (UID: "3cc63e1e-427e-4268-bd2a-0137da7b65a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.692889 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.692943 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.692953 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886119 4985 generic.go:334] "Generic (PLEG): container finished" podID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerID="b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9" exitCode=0 Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886167 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerDied","Data":"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9"} Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886205 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerDied","Data":"761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d"} Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886375 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886226 4985 scope.go:117] "RemoveContainer" containerID="b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.922301 4985 scope.go:117] "RemoveContainer" containerID="a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880" Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.945697 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"] Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.956887 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"] Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.958052 4985 scope.go:117] "RemoveContainer" containerID="411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12" Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.011417 4985 scope.go:117] "RemoveContainer" containerID="b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9" Jan 28 19:31:36 crc kubenswrapper[4985]: E0128 19:31:36.012024 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9\": container with ID starting with b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9 not found: ID does not exist" containerID="b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9" Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.012058 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9"} err="failed to get container status \"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9\": rpc error: code = NotFound desc = could not find container \"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9\": container with ID starting with b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9 not found: ID does not exist" Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.012080 4985 scope.go:117] "RemoveContainer" containerID="a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880" Jan 28 19:31:36 crc kubenswrapper[4985]: E0128 19:31:36.012622 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880\": container with ID starting with a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880 not found: ID does not exist" containerID="a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880" Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.012668 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880"} err="failed to get container status \"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880\": rpc error: code = NotFound desc = could not find container \"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880\": container with ID starting with a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880 not found: ID does not exist" Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.012697 4985 scope.go:117] "RemoveContainer" 
containerID="411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12" Jan 28 19:31:36 crc kubenswrapper[4985]: E0128 19:31:36.013026 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12\": container with ID starting with 411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12 not found: ID does not exist" containerID="411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12" Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.013085 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12"} err="failed to get container status \"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12\": rpc error: code = NotFound desc = could not find container \"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12\": container with ID starting with 411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12 not found: ID does not exist" Jan 28 19:31:37 crc kubenswrapper[4985]: I0128 19:31:37.282930 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" path="/var/lib/kubelet/pods/3cc63e1e-427e-4268-bd2a-0137da7b65a9/volumes" Jan 28 19:32:41 crc kubenswrapper[4985]: I0128 19:32:41.186278 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:32:41 crc kubenswrapper[4985]: I0128 19:32:41.186951 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:33:11 crc kubenswrapper[4985]: I0128 19:33:11.186328 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:33:11 crc kubenswrapper[4985]: I0128 19:33:11.187080 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.185934 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.186540 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.186599 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.187302 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.187367 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675" gracePeriod=600 Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.570911 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675" exitCode=0 Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.571040 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675"} Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.571469 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:33:42 crc kubenswrapper[4985]: I0128 19:33:42.589552 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"} Jan 28 19:35:41 crc kubenswrapper[4985]: I0128 19:35:41.185992 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:35:41 crc kubenswrapper[4985]: I0128 19:35:41.186551 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:36:11 crc kubenswrapper[4985]: I0128 19:36:11.186500 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:36:11 crc kubenswrapper[4985]: I0128 19:36:11.187178 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" 
podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.186582 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.187189 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.187267 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.188347 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.188437 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" gracePeriod=600 Jan 28 19:36:41 crc kubenswrapper[4985]: E0128 19:36:41.308390 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.901514 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" exitCode=0 Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.901571 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"} Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.901611 4985 scope.go:117] "RemoveContainer" containerID="7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.906144 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:36:41 crc kubenswrapper[4985]: E0128 19:36:41.907844 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:36:53 crc kubenswrapper[4985]: I0128 19:36:53.264201 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:36:53 crc kubenswrapper[4985]: E0128 19:36:53.264974 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:37:08 crc kubenswrapper[4985]: I0128 19:37:08.265293 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:37:08 crc kubenswrapper[4985]: E0128 19:37:08.266528 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:37:22 crc kubenswrapper[4985]: I0128 19:37:22.298700 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:37:22 crc kubenswrapper[4985]: E0128 19:37:22.300944 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:37:33 crc kubenswrapper[4985]: I0128 19:37:33.264155 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:37:33 crc kubenswrapper[4985]: E0128 19:37:33.264923 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:37:47 crc kubenswrapper[4985]: I0128 19:37:47.265653 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:37:47 crc kubenswrapper[4985]: E0128 19:37:47.267488 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:38:00 crc kubenswrapper[4985]: I0128 19:38:00.239562 4985 trace.go:236] Trace[1046175934]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (28-Jan-2026 19:37:59.179) (total time: 1059ms): Jan 28 19:38:00 crc kubenswrapper[4985]: Trace[1046175934]: [1.059137805s] [1.059137805s] END Jan 28 19:38:02 crc kubenswrapper[4985]: I0128 19:38:02.264959 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:38:02 crc kubenswrapper[4985]: E0128 19:38:02.266006 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.606534 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n5dzq"] Jan 28 19:38:16 crc kubenswrapper[4985]: E0128 19:38:16.607732 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="extract-utilities" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.607751 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="extract-utilities" Jan 28 19:38:16 crc kubenswrapper[4985]: E0128 19:38:16.607774 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="registry-server" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.607782 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="registry-server" Jan 28 19:38:16 crc kubenswrapper[4985]: E0128 19:38:16.607832 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="extract-content" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.607840 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="extract-content" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.608120 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="registry-server" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.610303 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.645832 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.646121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.646244 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.647459 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"] Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.749663 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.749803 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.749934 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.750674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.750716 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.776685 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.948692 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:17 crc kubenswrapper[4985]: I0128 19:38:17.264330 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:38:17 crc kubenswrapper[4985]: E0128 19:38:17.265864 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:38:17 crc kubenswrapper[4985]: W0128 19:38:17.519990 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8194ba08_4eee_42cf_90e5_997fed0b6208.slice/crio-0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da WatchSource:0}: Error finding container 0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da: Status 404 returned error can't find the container with id 0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da Jan 28 19:38:17 crc kubenswrapper[4985]: I0128 19:38:17.520043 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"] Jan 28 19:38:18 crc kubenswrapper[4985]: I0128 19:38:18.174237 4985 generic.go:334] "Generic (PLEG): container finished" podID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerID="6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d" exitCode=0 Jan 28 19:38:18 crc kubenswrapper[4985]: I0128 19:38:18.174635 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerDied","Data":"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d"} Jan 28 19:38:18 crc kubenswrapper[4985]: I0128 19:38:18.174922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerStarted","Data":"0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da"} Jan 28 19:38:18 crc kubenswrapper[4985]: I0128 19:38:18.180942 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:38:19 crc kubenswrapper[4985]: I0128 19:38:19.190746 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerStarted","Data":"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc"} Jan 28 19:38:21 crc kubenswrapper[4985]: I0128 19:38:21.218393 4985 generic.go:334] "Generic (PLEG): container finished" podID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerID="5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc" exitCode=0 Jan 28 19:38:21 crc 
kubenswrapper[4985]: I0128 19:38:21.218459 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerDied","Data":"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc"} Jan 28 19:38:22 crc kubenswrapper[4985]: I0128 19:38:22.236646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerStarted","Data":"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229"} Jan 28 19:38:22 crc kubenswrapper[4985]: I0128 19:38:22.280845 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n5dzq" podStartSLOduration=2.7557350510000003 podStartE2EDuration="6.280817529s" podCreationTimestamp="2026-01-28 19:38:16 +0000 UTC" firstStartedPulling="2026-01-28 19:38:18.18065021 +0000 UTC m=+5109.007213041" lastFinishedPulling="2026-01-28 19:38:21.705732698 +0000 UTC m=+5112.532295519" observedRunningTime="2026-01-28 19:38:22.262806919 +0000 UTC m=+5113.089369740" watchObservedRunningTime="2026-01-28 19:38:22.280817529 +0000 UTC m=+5113.107380390" Jan 28 19:38:26 crc kubenswrapper[4985]: I0128 19:38:26.950310 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:26 crc kubenswrapper[4985]: I0128 19:38:26.950861 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:27 crc kubenswrapper[4985]: I0128 19:38:27.031230 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:27 crc kubenswrapper[4985]: I0128 19:38:27.356161 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:27 crc kubenswrapper[4985]: I0128 19:38:27.415459 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"] Jan 28 19:38:29 crc kubenswrapper[4985]: I0128 19:38:29.316354 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n5dzq" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="registry-server" containerID="cri-o://ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229" gracePeriod=2 Jan 28 19:38:29 crc kubenswrapper[4985]: I0128 19:38:29.913998 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.023066 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") pod \"8194ba08-4eee-42cf-90e5-997fed0b6208\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.023141 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") pod \"8194ba08-4eee-42cf-90e5-997fed0b6208\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.023338 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") pod \"8194ba08-4eee-42cf-90e5-997fed0b6208\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.025442 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities" (OuterVolumeSpecName: "utilities") pod "8194ba08-4eee-42cf-90e5-997fed0b6208" (UID: "8194ba08-4eee-42cf-90e5-997fed0b6208"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.031146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr" (OuterVolumeSpecName: "kube-api-access-mh6vr") pod "8194ba08-4eee-42cf-90e5-997fed0b6208" (UID: "8194ba08-4eee-42cf-90e5-997fed0b6208"). InnerVolumeSpecName "kube-api-access-mh6vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.097781 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8194ba08-4eee-42cf-90e5-997fed0b6208" (UID: "8194ba08-4eee-42cf-90e5-997fed0b6208"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.130550 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.130599 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.130615 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") on node \"crc\" DevicePath \"\"" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331398 4985 generic.go:334] "Generic (PLEG): container finished" podID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerID="ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229" exitCode=0 Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerDied","Data":"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229"} Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331502 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerDied","Data":"0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da"} Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331531 4985 scope.go:117] "RemoveContainer" containerID="ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331734 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5dzq" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.356660 4985 scope.go:117] "RemoveContainer" containerID="5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.392833 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"] Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.400811 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"] Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.404720 4985 scope.go:117] "RemoveContainer" containerID="6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.460857 4985 scope.go:117] "RemoveContainer" containerID="ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229" Jan 28 19:38:30 crc kubenswrapper[4985]: E0128 19:38:30.461399 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229\": container with ID starting with ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229 not found: ID does not exist" containerID="ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.461469 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229"} err="failed to get container status \"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229\": rpc error: code = NotFound desc = could not find container \"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229\": container with ID starting with ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229 not found: ID does not exist" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.461509 4985 scope.go:117] "RemoveContainer" containerID="5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc" Jan 28 19:38:30 crc kubenswrapper[4985]: E0128 19:38:30.461947 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc\": container with ID starting with 5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc not found: ID does not exist" containerID="5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.461999 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc"} err="failed to get container status \"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc\": rpc error: code = NotFound desc = could not find container \"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc\": container with ID starting with 5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc not found: ID does not exist" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.462038 4985 scope.go:117] "RemoveContainer" containerID="6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d" Jan 28 19:38:30 crc kubenswrapper[4985]: E0128 19:38:30.462421 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d\": container with ID starting with 6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d not found: ID does not exist" containerID="6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d" Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.462456 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d"} err="failed to get container status \"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d\": rpc error: code = NotFound desc = could not find container \"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d\": container with ID starting with 6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d not found: ID does not exist" Jan 28 19:38:31 crc kubenswrapper[4985]: I0128 19:38:31.276019 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:38:31 crc kubenswrapper[4985]: E0128 19:38:31.276735 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:38:31 crc kubenswrapper[4985]: I0128 19:38:31.280846 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" path="/var/lib/kubelet/pods/8194ba08-4eee-42cf-90e5-997fed0b6208/volumes" Jan 28 19:38:42 crc kubenswrapper[4985]: I0128 19:38:42.265025 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:38:42 crc kubenswrapper[4985]: E0128 19:38:42.265908 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:38:56 crc kubenswrapper[4985]: I0128 19:38:56.264166 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:38:56 crc kubenswrapper[4985]: E0128 19:38:56.265120 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:39:08 crc kubenswrapper[4985]: I0128 19:39:08.264289 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:39:08 crc kubenswrapper[4985]: E0128 19:39:08.265660 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:39:21 crc kubenswrapper[4985]: I0128 19:39:21.310275 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:39:21 crc kubenswrapper[4985]: E0128 19:39:21.314048 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:39:33 crc kubenswrapper[4985]: I0128 19:39:33.874621 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:39:33 crc kubenswrapper[4985]: E0128 19:39:33.876983 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:39:45 crc kubenswrapper[4985]: I0128 19:39:45.264659 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:39:45 crc kubenswrapper[4985]: E0128 19:39:45.267239 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.414451 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:39:52 crc kubenswrapper[4985]: E0128 19:39:52.416737 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="extract-content" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.416859 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="extract-content" Jan 28 19:39:52 crc kubenswrapper[4985]: E0128 19:39:52.416953 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="registry-server" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.417031 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="registry-server" Jan 28 19:39:52 crc kubenswrapper[4985]: E0128 19:39:52.417134 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="extract-utilities" Jan 28 19:39:52 crc 
kubenswrapper[4985]: I0128 19:39:52.417211 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="extract-utilities" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.417631 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="registry-server" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.419959 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.446375 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.455860 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.456515 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.456716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.559089 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.559164 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.559220 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.560004 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: 
I0128 19:39:52.560039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.592071 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.759120 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:53 crc kubenswrapper[4985]: I0128 19:39:53.308880 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:39:54 crc kubenswrapper[4985]: I0128 19:39:54.204685 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerStarted","Data":"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7"} Jan 28 19:39:54 crc kubenswrapper[4985]: I0128 19:39:54.205018 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerStarted","Data":"c060460c2a544abd567f16e8afbf161a027f4391d1a578f020dab8a0d2e7a75e"} Jan 28 19:39:55 crc kubenswrapper[4985]: I0128 19:39:55.221464 4985 generic.go:334] "Generic (PLEG): container finished" podID="c8200781-f798-46b5-bebe-e2703093cc9a" containerID="f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7" exitCode=0 Jan 28 19:39:55 crc kubenswrapper[4985]: I0128 19:39:55.221558 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerDied","Data":"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7"} Jan 28 19:39:55 crc kubenswrapper[4985]: I0128 19:39:55.222643 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerStarted","Data":"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e"} Jan 28 19:39:57 crc kubenswrapper[4985]: I0128 19:39:57.264956 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:39:57 crc kubenswrapper[4985]: E0128 19:39:57.266017 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:05 crc kubenswrapper[4985]: I0128 19:40:05.349421 4985 generic.go:334] "Generic (PLEG): container finished" podID="c8200781-f798-46b5-bebe-e2703093cc9a" containerID="9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e" 
exitCode=0 Jan 28 19:40:05 crc kubenswrapper[4985]: I0128 19:40:05.349855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerDied","Data":"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e"} Jan 28 19:40:06 crc kubenswrapper[4985]: I0128 19:40:06.364384 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerStarted","Data":"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344"} Jan 28 19:40:06 crc kubenswrapper[4985]: I0128 19:40:06.398201 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7mbz" podStartSLOduration=2.8362278610000002 podStartE2EDuration="14.39818036s" podCreationTimestamp="2026-01-28 19:39:52 +0000 UTC" firstStartedPulling="2026-01-28 19:39:54.206869034 +0000 UTC m=+5205.033431855" lastFinishedPulling="2026-01-28 19:40:05.768821533 +0000 UTC m=+5216.595384354" observedRunningTime="2026-01-28 19:40:06.386292924 +0000 UTC m=+5217.212855745" watchObservedRunningTime="2026-01-28 19:40:06.39818036 +0000 UTC m=+5217.224743171" Jan 28 19:40:08 crc kubenswrapper[4985]: I0128 19:40:08.266801 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:08 crc kubenswrapper[4985]: E0128 19:40:08.267468 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:12 crc kubenswrapper[4985]: I0128 19:40:12.759878 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:12 crc kubenswrapper[4985]: I0128 19:40:12.760186 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:13 crc kubenswrapper[4985]: I0128 19:40:13.821050 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7mbz" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" probeResult="failure" output=< Jan 28 19:40:13 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:40:13 crc kubenswrapper[4985]: > Jan 28 19:40:21 crc kubenswrapper[4985]: I0128 19:40:21.271470 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:21 crc kubenswrapper[4985]: E0128 19:40:21.272199 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:23 crc kubenswrapper[4985]: I0128 19:40:23.805641 4985 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-x7mbz" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" probeResult="failure" output=< Jan 28 19:40:23 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:40:23 crc kubenswrapper[4985]: > Jan 28 19:40:32 crc kubenswrapper[4985]: I0128 19:40:32.264471 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:32 crc kubenswrapper[4985]: E0128 19:40:32.265179 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:32 crc kubenswrapper[4985]: I0128 19:40:32.810766 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:32 crc kubenswrapper[4985]: I0128 19:40:32.871646 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:33 crc kubenswrapper[4985]: I0128 19:40:33.053411 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:40:34 crc kubenswrapper[4985]: I0128 19:40:34.701885 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7mbz" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" containerID="cri-o://dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" gracePeriod=2 Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.286542 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.403790 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") pod \"c8200781-f798-46b5-bebe-e2703093cc9a\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.404030 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") pod \"c8200781-f798-46b5-bebe-e2703093cc9a\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.404078 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") pod \"c8200781-f798-46b5-bebe-e2703093cc9a\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.405948 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities" (OuterVolumeSpecName: "utilities") pod "c8200781-f798-46b5-bebe-e2703093cc9a" (UID: "c8200781-f798-46b5-bebe-e2703093cc9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.407230 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.411490 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9" (OuterVolumeSpecName: "kube-api-access-xgrr9") pod "c8200781-f798-46b5-bebe-e2703093cc9a" (UID: "c8200781-f798-46b5-bebe-e2703093cc9a"). InnerVolumeSpecName "kube-api-access-xgrr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.509069 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") on node \"crc\" DevicePath \"\"" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.553764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8200781-f798-46b5-bebe-e2703093cc9a" (UID: "c8200781-f798-46b5-bebe-e2703093cc9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.611986 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712766 4985 generic.go:334] "Generic (PLEG): container finished" podID="c8200781-f798-46b5-bebe-e2703093cc9a" containerID="dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" exitCode=0 Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712813 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerDied","Data":"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344"} Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712831 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712863 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerDied","Data":"c060460c2a544abd567f16e8afbf161a027f4391d1a578f020dab8a0d2e7a75e"} Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712884 4985 scope.go:117] "RemoveContainer" containerID="dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.746083 4985 scope.go:117] "RemoveContainer" containerID="9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.769017 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.782986 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.784475 4985 scope.go:117] "RemoveContainer" containerID="f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.832623 4985 scope.go:117] "RemoveContainer" containerID="dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" Jan 28 19:40:35 crc kubenswrapper[4985]: E0128 19:40:35.833046 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344\": container with ID starting with dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344 not found: ID does not exist" containerID="dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.833082 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344"} err="failed to get container status \"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344\": rpc error: code = NotFound desc = could not find container \"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344\": container with ID starting with dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344 not found: ID does not exist" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.833106 4985 scope.go:117] "RemoveContainer" containerID="9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e" Jan 28 19:40:35 crc kubenswrapper[4985]: E0128 19:40:35.833701 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e\": container with ID starting with 9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e not found: ID does not exist" containerID="9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.833790 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e"} err="failed to get container status \"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e\": rpc error: code = NotFound desc = could not find container 
\"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e\": container with ID starting with 9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e not found: ID does not exist" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.833818 4985 scope.go:117] "RemoveContainer" containerID="f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7" Jan 28 19:40:35 crc kubenswrapper[4985]: E0128 19:40:35.834192 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7\": container with ID starting with f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7 not found: ID does not exist" containerID="f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.834207 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7"} err="failed to get container status \"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7\": rpc error: code = NotFound desc = could not find container \"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7\": container with ID starting with f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7 not found: ID does not exist" Jan 28 19:40:37 crc kubenswrapper[4985]: I0128 19:40:37.279913 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" path="/var/lib/kubelet/pods/c8200781-f798-46b5-bebe-e2703093cc9a/volumes" Jan 28 19:40:43 crc kubenswrapper[4985]: I0128 19:40:43.266531 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:43 crc kubenswrapper[4985]: E0128 19:40:43.267964 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:57 crc kubenswrapper[4985]: I0128 19:40:57.264626 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:57 crc kubenswrapper[4985]: E0128 19:40:57.265369 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:41:11 crc kubenswrapper[4985]: I0128 19:41:11.277764 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:41:11 crc kubenswrapper[4985]: E0128 19:41:11.278800 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:41:22 crc kubenswrapper[4985]: I0128 19:41:22.264445 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:41:22 crc kubenswrapper[4985]: E0128 19:41:22.265295 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.958938 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:26 crc kubenswrapper[4985]: E0128 19:41:26.960392 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="extract-content" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.960415 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="extract-content" Jan 28 19:41:26 crc kubenswrapper[4985]: E0128 19:41:26.960494 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.960510 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" Jan 28 19:41:26 crc kubenswrapper[4985]: E0128 19:41:26.960532 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="extract-utilities" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.960543 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="extract-utilities" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.961011 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.963579 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.973247 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.095787 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.095948 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.096110 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.199432 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.199519 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.199613 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.200260 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.200372 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.220348 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.305228 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.799291 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:28 crc kubenswrapper[4985]: I0128 19:41:28.373424 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c879773-1159-4057-9025-6b6903d4dddc" containerID="f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12" exitCode=0 Jan 28 19:41:28 crc kubenswrapper[4985]: I0128 19:41:28.374026 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerDied","Data":"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12"} Jan 28 19:41:28 crc kubenswrapper[4985]: I0128 19:41:28.374068 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerStarted","Data":"e5889886528cc2c62aea92e10443213f833c70743da932f4366fbe7ae812ac86"} Jan 28 19:41:30 crc kubenswrapper[4985]: I0128 19:41:30.420763 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c879773-1159-4057-9025-6b6903d4dddc" containerID="568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1" exitCode=0 Jan 28 19:41:30 crc kubenswrapper[4985]: I0128 19:41:30.420837 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerDied","Data":"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1"} Jan 28 19:41:31 crc kubenswrapper[4985]: I0128 19:41:31.438773 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerStarted","Data":"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035"} Jan 28 19:41:31 crc kubenswrapper[4985]: I0128 19:41:31.466370 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9c226" podStartSLOduration=3.011869414 podStartE2EDuration="5.466341302s" podCreationTimestamp="2026-01-28 19:41:26 +0000 UTC" firstStartedPulling="2026-01-28 19:41:28.376162296 +0000 UTC m=+5299.202725117" lastFinishedPulling="2026-01-28 19:41:30.830634164 +0000 UTC m=+5301.657197005" observedRunningTime="2026-01-28 19:41:31.457344097 +0000 UTC m=+5302.283906988" watchObservedRunningTime="2026-01-28 19:41:31.466341302 +0000 UTC m=+5302.292904153" Jan 28 19:41:36 crc kubenswrapper[4985]: I0128 19:41:36.265341 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:41:36 crc kubenswrapper[4985]: E0128 19:41:36.266357 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.305421 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.305745 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.385089 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.550418 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.634833 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:39 crc kubenswrapper[4985]: I0128 19:41:39.523769 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9c226" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="registry-server" containerID="cri-o://65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" gracePeriod=2 Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.093988 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.227612 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") pod \"0c879773-1159-4057-9025-6b6903d4dddc\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.227696 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") pod \"0c879773-1159-4057-9025-6b6903d4dddc\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.227742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") pod \"0c879773-1159-4057-9025-6b6903d4dddc\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.228744 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities" (OuterVolumeSpecName: "utilities") pod "0c879773-1159-4057-9025-6b6903d4dddc" (UID: "0c879773-1159-4057-9025-6b6903d4dddc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.235631 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn" (OuterVolumeSpecName: "kube-api-access-tg2jn") pod "0c879773-1159-4057-9025-6b6903d4dddc" (UID: "0c879773-1159-4057-9025-6b6903d4dddc"). InnerVolumeSpecName "kube-api-access-tg2jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.329627 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") on node \"crc\" DevicePath \"\"" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.329971 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.340994 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c879773-1159-4057-9025-6b6903d4dddc" (UID: "0c879773-1159-4057-9025-6b6903d4dddc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.431661 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535528 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c879773-1159-4057-9025-6b6903d4dddc" containerID="65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" exitCode=0 Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535574 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerDied","Data":"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035"} Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535603 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerDied","Data":"e5889886528cc2c62aea92e10443213f833c70743da932f4366fbe7ae812ac86"} Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535624 4985 scope.go:117] "RemoveContainer" containerID="65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535775 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.570433 4985 scope.go:117] "RemoveContainer" containerID="568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.600573 4985 scope.go:117] "RemoveContainer" containerID="f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.600763 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.610791 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.659824 4985 scope.go:117] "RemoveContainer" containerID="65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" Jan 28 19:41:40 crc kubenswrapper[4985]: E0128 19:41:40.660625 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035\": container with ID starting with 65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035 not found: ID does not exist" containerID="65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.660675 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035"} err="failed to get container status \"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035\": rpc error: code = NotFound desc = could not find container \"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035\": container with ID starting with 65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035 not found: ID does not exist" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.660699 4985 scope.go:117] "RemoveContainer" containerID="568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1" Jan 28 19:41:40 crc kubenswrapper[4985]: E0128 19:41:40.661285 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1\": container with ID starting with 568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1 not found: ID does not exist" containerID="568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.661339 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1"} err="failed to get container status \"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1\": rpc error: code = NotFound desc = could not find container \"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1\": container with ID starting with 568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1 not found: ID does not exist" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.661371 4985 scope.go:117] "RemoveContainer" containerID="f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12" Jan 28 19:41:40 crc kubenswrapper[4985]: E0128 19:41:40.661640 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12\": container with ID starting with f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12 not found: ID does not exist" containerID="f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.661711 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12"} err="failed to get container status \"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12\": rpc error: code = NotFound desc = could not find container \"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12\": container with ID starting with f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12 not found: ID does not exist" Jan 28 19:41:41 crc kubenswrapper[4985]: I0128 19:41:41.288749 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c879773-1159-4057-9025-6b6903d4dddc" path="/var/lib/kubelet/pods/0c879773-1159-4057-9025-6b6903d4dddc/volumes" Jan 28 19:41:48 crc kubenswrapper[4985]: I0128 19:41:48.265730 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:41:48 crc kubenswrapper[4985]: I0128 19:41:48.663541 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d"} Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.703661 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"] Jan 28 19:43:17 crc kubenswrapper[4985]: E0128 19:43:17.704818 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="extract-utilities" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.704840 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="extract-utilities" Jan 28 19:43:17 crc kubenswrapper[4985]: E0128 19:43:17.704858 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="registry-server" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.704866 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="registry-server" Jan 28 19:43:17 crc kubenswrapper[4985]: E0128 19:43:17.704887 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="extract-content" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.704900 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="extract-content" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.705542 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="registry-server" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.709430 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.720953 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.721311 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.721935 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"] Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.722854 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.825517 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.825623 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.825762 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.826640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.826733 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.887535 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.038460 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.576697 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"] Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.917662 4985 generic.go:334] "Generic (PLEG): container finished" podID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerID="53cf309ed2f40b50e2f28902eb3196f215436b1a3ae84c2cb6de2fc4f4e68e70" exitCode=0 Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.917717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerDied","Data":"53cf309ed2f40b50e2f28902eb3196f215436b1a3ae84c2cb6de2fc4f4e68e70"} Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.917941 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerStarted","Data":"4b1bcbca2155115d965173a4aa8738794325cf386b7456e68f57d25f66a42f5b"} Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.919960 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:43:19 crc kubenswrapper[4985]: I0128 19:43:19.930735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerStarted","Data":"5342aa70327afa7d8c40c750487da67a40528b4412aa1733a682408974326cc5"} Jan 28 19:43:22 crc kubenswrapper[4985]: I0128 19:43:22.971622 4985 generic.go:334] "Generic (PLEG): container finished" podID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerID="5342aa70327afa7d8c40c750487da67a40528b4412aa1733a682408974326cc5" exitCode=0 Jan 28 19:43:22 crc kubenswrapper[4985]: I0128 19:43:22.971713 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerDied","Data":"5342aa70327afa7d8c40c750487da67a40528b4412aa1733a682408974326cc5"} Jan 28 19:43:26 crc kubenswrapper[4985]: I0128 19:43:26.003333 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerStarted","Data":"516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb"} Jan 28 19:43:26 crc kubenswrapper[4985]: I0128 19:43:26.031620 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8kdfx" podStartSLOduration=2.494810002 podStartE2EDuration="9.031598374s" podCreationTimestamp="2026-01-28 19:43:17 +0000 UTC" firstStartedPulling="2026-01-28 19:43:18.919586097 +0000 UTC m=+5409.746148928" lastFinishedPulling="2026-01-28 19:43:25.456374439 +0000 UTC m=+5416.282937300" observedRunningTime="2026-01-28 19:43:26.021709974 +0000 UTC m=+5416.848272795" watchObservedRunningTime="2026-01-28 
19:43:26.031598374 +0000 UTC m=+5416.858161205" Jan 28 19:43:28 crc kubenswrapper[4985]: I0128 19:43:28.055763 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:28 crc kubenswrapper[4985]: I0128 19:43:28.058601 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:28 crc kubenswrapper[4985]: I0128 19:43:28.126533 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:30 crc kubenswrapper[4985]: I0128 19:43:30.166971 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:30 crc kubenswrapper[4985]: I0128 19:43:30.227097 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"] Jan 28 19:43:32 crc kubenswrapper[4985]: I0128 19:43:32.116124 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8kdfx" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="registry-server" containerID="cri-o://516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb" gracePeriod=2 Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.134926 4985 generic.go:334] "Generic (PLEG): container finished" podID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerID="516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb" exitCode=0 Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.135119 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerDied","Data":"516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb"} Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.337435 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.357496 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") pod \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.357648 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") pod \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.357746 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") pod \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.359808 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities" (OuterVolumeSpecName: "utilities") pod "1bd75f3d-baf4-4a14-bf0a-182f76c18de8" (UID: "1bd75f3d-baf4-4a14-bf0a-182f76c18de8"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.382472 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j" (OuterVolumeSpecName: "kube-api-access-28p2j") pod "1bd75f3d-baf4-4a14-bf0a-182f76c18de8" (UID: "1bd75f3d-baf4-4a14-bf0a-182f76c18de8"). InnerVolumeSpecName "kube-api-access-28p2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.437339 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bd75f3d-baf4-4a14-bf0a-182f76c18de8" (UID: "1bd75f3d-baf4-4a14-bf0a-182f76c18de8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.461664 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.461698 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") on node \"crc\" DevicePath \"\"" Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.461708 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.151743 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerDied","Data":"4b1bcbca2155115d965173a4aa8738794325cf386b7456e68f57d25f66a42f5b"} Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.152148 4985 scope.go:117] "RemoveContainer" containerID="516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb" Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.151878 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kdfx" Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.190029 4985 scope.go:117] "RemoveContainer" containerID="5342aa70327afa7d8c40c750487da67a40528b4412aa1733a682408974326cc5" Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.234010 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"] Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.243862 4985 scope.go:117] "RemoveContainer" containerID="53cf309ed2f40b50e2f28902eb3196f215436b1a3ae84c2cb6de2fc4f4e68e70" Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.252145 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"] Jan 28 19:43:35 crc kubenswrapper[4985]: I0128 19:43:35.290556 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" path="/var/lib/kubelet/pods/1bd75f3d-baf4-4a14-bf0a-182f76c18de8/volumes" Jan 28 19:44:11 crc kubenswrapper[4985]: I0128 19:44:11.186165 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:44:11 crc kubenswrapper[4985]: I0128 19:44:11.186849 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:44:41 crc kubenswrapper[4985]: I0128 19:44:41.186572 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:44:41 crc kubenswrapper[4985]: I0128 19:44:41.187356 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.151961 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"] Jan 28 19:45:00 crc kubenswrapper[4985]: E0128 19:45:00.152958 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="registry-server" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.152971 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="registry-server" Jan 28 19:45:00 crc kubenswrapper[4985]: E0128 19:45:00.152989 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="extract-utilities" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.152997 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="extract-utilities" Jan 28 19:45:00 crc 
kubenswrapper[4985]: E0128 19:45:00.153039 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="extract-content" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.153045 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="extract-content" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.153279 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="registry-server" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.154011 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.156678 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.156893 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.166947 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"] Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.195757 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.195850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.195892 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.298908 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.299017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.299278 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.300495 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.311110 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.333629 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.475484 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:01 crc kubenswrapper[4985]: I0128 19:45:01.011949 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"] Jan 28 19:45:01 crc kubenswrapper[4985]: W0128 19:45:01.012855 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b1d5c3_055f_41c9_aae7_f397142ddf05.slice/crio-b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c WatchSource:0}: Error finding container b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c: Status 404 returned error can't find the container with id b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c Jan 28 19:45:01 crc kubenswrapper[4985]: I0128 19:45:01.223888 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" event={"ID":"73b1d5c3-055f-41c9-aae7-f397142ddf05","Type":"ContainerStarted","Data":"db2846bf6da7236873840864c39f40024962e4f67507dcec60b63c320c36883d"} Jan 28 19:45:01 crc kubenswrapper[4985]: I0128 19:45:01.224190 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" event={"ID":"73b1d5c3-055f-41c9-aae7-f397142ddf05","Type":"ContainerStarted","Data":"b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c"} Jan 28 19:45:02 crc kubenswrapper[4985]: I0128 19:45:02.256170 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" podStartSLOduration=2.256147093 podStartE2EDuration="2.256147093s" podCreationTimestamp="2026-01-28 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:45:02.251132941 +0000 UTC m=+5513.077695762" watchObservedRunningTime="2026-01-28 19:45:02.256147093 +0000 UTC m=+5513.082709914" Jan 28 19:45:03 crc kubenswrapper[4985]: I0128 19:45:03.285303 4985 generic.go:334] "Generic (PLEG): container finished" podID="73b1d5c3-055f-41c9-aae7-f397142ddf05" containerID="db2846bf6da7236873840864c39f40024962e4f67507dcec60b63c320c36883d" exitCode=0 Jan 28 19:45:03 crc kubenswrapper[4985]: I0128 19:45:03.289019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" event={"ID":"73b1d5c3-055f-41c9-aae7-f397142ddf05","Type":"ContainerDied","Data":"db2846bf6da7236873840864c39f40024962e4f67507dcec60b63c320c36883d"} Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.753197 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.843723 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") pod \"73b1d5c3-055f-41c9-aae7-f397142ddf05\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.843954 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") pod \"73b1d5c3-055f-41c9-aae7-f397142ddf05\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.844104 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") pod \"73b1d5c3-055f-41c9-aae7-f397142ddf05\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.844962 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume" (OuterVolumeSpecName: "config-volume") pod "73b1d5c3-055f-41c9-aae7-f397142ddf05" (UID: "73b1d5c3-055f-41c9-aae7-f397142ddf05"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.845613 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.850896 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc" (OuterVolumeSpecName: "kube-api-access-b8bxc") pod "73b1d5c3-055f-41c9-aae7-f397142ddf05" (UID: "73b1d5c3-055f-41c9-aae7-f397142ddf05"). InnerVolumeSpecName "kube-api-access-b8bxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.864983 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "73b1d5c3-055f-41c9-aae7-f397142ddf05" (UID: "73b1d5c3-055f-41c9-aae7-f397142ddf05"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.947669 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") on node \"crc\" DevicePath \"\"" Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.947970 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.312769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" event={"ID":"73b1d5c3-055f-41c9-aae7-f397142ddf05","Type":"ContainerDied","Data":"b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c"} Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.313066 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c" Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.312826 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.880116 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"] Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.894309 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"] Jan 28 19:45:07 crc kubenswrapper[4985]: I0128 19:45:07.279308 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" path="/var/lib/kubelet/pods/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1/volumes" Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.186879 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.189928 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.190232 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.192078 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.192459 4985 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d" gracePeriod=600 Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.381757 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d" exitCode=0 Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.381814 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d"} Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.382151 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:45:12 crc kubenswrapper[4985]: I0128 19:45:12.396708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"} Jan 28 19:45:16 crc kubenswrapper[4985]: E0128 19:45:16.734825 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:56152->38.102.83.195:43365: write tcp 38.102.83.195:56152->38.102.83.195:43365: write: broken pipe Jan 28 19:45:43 crc kubenswrapper[4985]: I0128 19:45:43.275089 4985 scope.go:117] "RemoveContainer" containerID="fc36e8e83ce2dcdbad3b7ac3097968106477e97a9a58431ad0304a2bcaebdce7" Jan 28 19:46:57 crc kubenswrapper[4985]: E0128 19:46:57.488853 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:45024->38.102.83.195:43365: write tcp 38.102.83.195:45024->38.102.83.195:43365: write: broken pipe Jan 28 19:47:11 crc kubenswrapper[4985]: I0128 19:47:11.186092 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:47:11 crc kubenswrapper[4985]: I0128 19:47:11.187352 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:47:41 crc kubenswrapper[4985]: I0128 19:47:41.186029 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:47:41 crc kubenswrapper[4985]: I0128 19:47:41.186626 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.186019 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.186655 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.186723 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.188096 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.188210 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" gracePeriod=600
Jan 28 19:48:11 crc kubenswrapper[4985]: E0128 19:48:11.308325 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.653013 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" exitCode=0
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.653309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"}
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.653347 4985 scope.go:117] "RemoveContainer" containerID="a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.654203 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:48:11 crc kubenswrapper[4985]: E0128 19:48:11.654558 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:48:25 crc kubenswrapper[4985]: I0128 19:48:25.264049 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:48:25 crc kubenswrapper[4985]: E0128 19:48:25.264945 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:48:36 crc kubenswrapper[4985]: I0128 19:48:36.265076 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:48:36 crc kubenswrapper[4985]: E0128 19:48:36.266035 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:48:50 crc kubenswrapper[4985]: I0128 19:48:50.263844 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:48:50 crc kubenswrapper[4985]: E0128 19:48:50.264923 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.042831 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:01 crc kubenswrapper[4985]: E0128 19:49:01.043979 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1d5c3-055f-41c9-aae7-f397142ddf05" containerName="collect-profiles"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.043992 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1d5c3-055f-41c9-aae7-f397142ddf05" containerName="collect-profiles"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.044268 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b1d5c3-055f-41c9-aae7-f397142ddf05" containerName="collect-profiles"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.047285 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.047372 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.099685 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.099755 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.099963 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.201974 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.202059 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.202234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.202651 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.202763 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.225559 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.381849 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.964964 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:02 crc kubenswrapper[4985]: I0128 19:49:02.432765 4985 generic.go:334] "Generic (PLEG): container finished" podID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerID="669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f" exitCode=0
Jan 28 19:49:02 crc kubenswrapper[4985]: I0128 19:49:02.432856 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerDied","Data":"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"}
Jan 28 19:49:02 crc kubenswrapper[4985]: I0128 19:49:02.433108 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerStarted","Data":"7231155381c11c3d4badfe2b0a0f3ce79d9af0ba702743f05c6d0732113049c6"}
Jan 28 19:49:02 crc kubenswrapper[4985]: I0128 19:49:02.436880 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 19:49:03 crc kubenswrapper[4985]: I0128 19:49:03.448577 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerStarted","Data":"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"}
Jan 28 19:49:04 crc kubenswrapper[4985]: I0128 19:49:04.264466 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:49:04 crc kubenswrapper[4985]: E0128 19:49:04.265068 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:49:06 crc kubenswrapper[4985]: I0128 19:49:06.487991 4985 generic.go:334] "Generic (PLEG): container finished" podID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerID="5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db" exitCode=0
Jan 28 19:49:06 crc kubenswrapper[4985]: I0128 19:49:06.488057 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerDied","Data":"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"}
Jan 28 19:49:08 crc kubenswrapper[4985]: I0128 19:49:08.525608 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerStarted","Data":"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"}
Jan 28 19:49:08 crc kubenswrapper[4985]: I0128 19:49:08.564748 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xrfq5" podStartSLOduration=4.100023969 podStartE2EDuration="8.564723289s" podCreationTimestamp="2026-01-28 19:49:00 +0000 UTC" firstStartedPulling="2026-01-28 19:49:02.436465013 +0000 UTC m=+5753.263027864" lastFinishedPulling="2026-01-28 19:49:06.901164323 +0000 UTC m=+5757.727727184" observedRunningTime="2026-01-28 19:49:08.555956411 +0000 UTC m=+5759.382519252" watchObservedRunningTime="2026-01-28 19:49:08.564723289 +0000 UTC m=+5759.391286150"
Jan 28 19:49:11 crc kubenswrapper[4985]: I0128 19:49:11.382452 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:11 crc kubenswrapper[4985]: I0128 19:49:11.382854 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:12 crc kubenswrapper[4985]: I0128 19:49:12.449345 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xrfq5" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server" probeResult="failure" output=<
Jan 28 19:49:12 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 19:49:12 crc kubenswrapper[4985]: >
Jan 28 19:49:17 crc kubenswrapper[4985]: I0128 19:49:17.265080 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:49:17 crc kubenswrapper[4985]: E0128 19:49:17.266109 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:49:21 crc kubenswrapper[4985]: I0128 19:49:21.441090 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:21 crc kubenswrapper[4985]: I0128 19:49:21.500111 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:21 crc kubenswrapper[4985]: I0128 19:49:21.689245 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:22 crc kubenswrapper[4985]: I0128 19:49:22.688982 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xrfq5" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server" containerID="cri-o://79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99" gracePeriod=2
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.175045 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.234903 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") pod \"f40cb468-52d9-418f-ae6e-f1262531b85a\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") "
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.235478 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") pod \"f40cb468-52d9-418f-ae6e-f1262531b85a\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") "
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.235557 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") pod \"f40cb468-52d9-418f-ae6e-f1262531b85a\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") "
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.238114 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities" (OuterVolumeSpecName: "utilities") pod "f40cb468-52d9-418f-ae6e-f1262531b85a" (UID: "f40cb468-52d9-418f-ae6e-f1262531b85a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.252797 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n" (OuterVolumeSpecName: "kube-api-access-6v94n") pod "f40cb468-52d9-418f-ae6e-f1262531b85a" (UID: "f40cb468-52d9-418f-ae6e-f1262531b85a"). InnerVolumeSpecName "kube-api-access-6v94n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.308472 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f40cb468-52d9-418f-ae6e-f1262531b85a" (UID: "f40cb468-52d9-418f-ae6e-f1262531b85a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.340549 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.340581 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") on node \"crc\" DevicePath \"\""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.340592 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705631 4985 generic.go:334] "Generic (PLEG): container finished" podID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerID="79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99" exitCode=0
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705688 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerDied","Data":"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"}
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705719 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerDied","Data":"7231155381c11c3d4badfe2b0a0f3ce79d9af0ba702743f05c6d0732113049c6"}
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705727 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705741 4985 scope.go:117] "RemoveContainer" containerID="79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.731905 4985 scope.go:117] "RemoveContainer" containerID="5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.763162 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.777787 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.785636 4985 scope.go:117] "RemoveContainer" containerID="669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.840273 4985 scope.go:117] "RemoveContainer" containerID="79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"
Jan 28 19:49:23 crc kubenswrapper[4985]: E0128 19:49:23.840828 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99\": container with ID starting with 79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99 not found: ID does not exist" containerID="79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.840889 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"} err="failed to get container status \"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99\": rpc error: code = NotFound desc = could not find container \"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99\": container with ID starting with 79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99 not found: ID does not exist"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.840915 4985 scope.go:117] "RemoveContainer" containerID="5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"
Jan 28 19:49:23 crc kubenswrapper[4985]: E0128 19:49:23.841324 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db\": container with ID starting with 5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db not found: ID does not exist" containerID="5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.841394 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"} err="failed to get container status \"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db\": rpc error: code = NotFound desc = could not find container \"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db\": container with ID starting with 5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db not found: ID does not exist"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.841436 4985 scope.go:117] "RemoveContainer" containerID="669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"
Jan 28 19:49:23 crc kubenswrapper[4985]: E0128 19:49:23.841763 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f\": container with ID starting with 669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f not found: ID does not exist" containerID="669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.841799 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"} err="failed to get container status \"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f\": rpc error: code = NotFound desc = could not find container \"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f\": container with ID starting with 669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f not found: ID does not exist"
Jan 28 19:49:25 crc kubenswrapper[4985]: I0128 19:49:25.284221 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" path="/var/lib/kubelet/pods/f40cb468-52d9-418f-ae6e-f1262531b85a/volumes"
Jan 28 19:49:32 crc kubenswrapper[4985]: I0128 19:49:32.264420 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:49:32 crc kubenswrapper[4985]: E0128 19:49:32.266274 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:49:46 crc kubenswrapper[4985]: I0128 19:49:46.264796 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:49:46 crc kubenswrapper[4985]: E0128 19:49:46.265591 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:50:00 crc kubenswrapper[4985]: I0128 19:50:00.265582 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:50:00 crc kubenswrapper[4985]: E0128 19:50:00.268134 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:50:12 crc kubenswrapper[4985]: I0128 19:50:12.264197 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:50:12 crc kubenswrapper[4985]: E0128 19:50:12.265000 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:50:25 crc kubenswrapper[4985]: I0128 19:50:25.270340 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:50:25 crc kubenswrapper[4985]: E0128 19:50:25.271394 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.889877 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"]
Jan 28 19:50:26 crc kubenswrapper[4985]: E0128 19:50:26.890936 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="extract-content"
Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.890955 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="extract-content"
Jan 28 19:50:26 crc kubenswrapper[4985]: E0128 19:50:26.891011 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="extract-utilities"
Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.891023 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="extract-utilities"
Jan 28 19:50:26 crc kubenswrapper[4985]: E0128 19:50:26.891040 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server"
Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.891048 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server"
Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.891411 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server"
Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.897681 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.903427 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"]
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.005741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.006077 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.006101 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.107732 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.107778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.107949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.108334 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.108435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.730902 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.825977 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:28 crc kubenswrapper[4985]: I0128 19:50:28.348492 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"]
Jan 28 19:50:28 crc kubenswrapper[4985]: I0128 19:50:28.507507 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerStarted","Data":"47e61066b587de4bdb4d330875bee6c011e7fa07480ad9c2d8f5468abae1466f"}
Jan 28 19:50:29 crc kubenswrapper[4985]: I0128 19:50:29.517524 4985 generic.go:334] "Generic (PLEG): container finished" podID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerID="e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc" exitCode=0
Jan 28 19:50:29 crc kubenswrapper[4985]: I0128 19:50:29.517770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerDied","Data":"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc"}
Jan 28 19:50:30 crc kubenswrapper[4985]: I0128 19:50:30.548475 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerStarted","Data":"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070"}
Jan 28 19:50:37 crc kubenswrapper[4985]: I0128 19:50:37.264513 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:50:37 crc kubenswrapper[4985]: E0128 19:50:37.265423 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:50:38 crc kubenswrapper[4985]: I0128 19:50:38.532981 4985 generic.go:334] "Generic (PLEG): container finished" podID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerID="424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070" exitCode=0
Jan 28 19:50:38 crc kubenswrapper[4985]: I0128 19:50:38.533312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerDied","Data":"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070"}
Jan 28 19:50:40 crc kubenswrapper[4985]: I0128 19:50:40.566039 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerStarted","Data":"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0"}
Jan 28 19:50:40 crc kubenswrapper[4985]: I0128 19:50:40.604988 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5lfg8" podStartSLOduration=5.158449258 podStartE2EDuration="14.604963937s" podCreationTimestamp="2026-01-28 19:50:26 +0000 UTC" firstStartedPulling="2026-01-28 19:50:29.520049894 +0000 UTC m=+5840.346612715" lastFinishedPulling="2026-01-28 19:50:38.966564533 +0000 UTC m=+5849.793127394" observedRunningTime="2026-01-28 19:50:40.592101373 +0000 UTC m=+5851.418664234" watchObservedRunningTime="2026-01-28 19:50:40.604963937 +0000 UTC m=+5851.431526758"
Jan 28 19:50:47 crc kubenswrapper[4985]: I0128 19:50:47.826230 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:47 crc kubenswrapper[4985]: I0128 19:50:47.826875 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:50:48 crc kubenswrapper[4985]: I0128 19:50:48.895741 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5lfg8" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server" probeResult="failure" output=<
Jan 28 19:50:48 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 19:50:48 crc kubenswrapper[4985]: >
Jan 28 19:50:49 crc kubenswrapper[4985]: I0128 19:50:49.264831 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:50:49 crc kubenswrapper[4985]: E0128 19:50:49.265590 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:50:58 crc kubenswrapper[4985]: I0128 19:50:58.899791 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5lfg8" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server" probeResult="failure" output=<
Jan 28 19:50:58 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 19:50:58 crc kubenswrapper[4985]: >
Jan 28 19:51:00 crc kubenswrapper[4985]: I0128 19:51:00.264804 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:51:00 crc kubenswrapper[4985]: E0128 19:51:00.266190 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:51:07 crc kubenswrapper[4985]: I0128 19:51:07.918628 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:51:08 crc kubenswrapper[4985]: I0128 19:51:08.006129 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:51:08 crc kubenswrapper[4985]: I0128 19:51:08.171354 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"]
Jan 28 19:51:08 crc kubenswrapper[4985]: I0128 19:51:08.995793 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5lfg8" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server" containerID="cri-o://5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0" gracePeriod=2
Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.677027 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.799890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") pod \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") "
Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.800024 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") pod \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") "
Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.800136 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") pod \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") "
Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.801652 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities" (OuterVolumeSpecName: "utilities") pod "7409f2a2-14dd-4bd9-9b0d-68d468d7a036" (UID: "7409f2a2-14dd-4bd9-9b0d-68d468d7a036"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.811293 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42" (OuterVolumeSpecName: "kube-api-access-knb42") pod "7409f2a2-14dd-4bd9-9b0d-68d468d7a036" (UID: "7409f2a2-14dd-4bd9-9b0d-68d468d7a036"). InnerVolumeSpecName "kube-api-access-knb42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.902797 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") on node \"crc\" DevicePath \"\""
Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.902834 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.917194 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7409f2a2-14dd-4bd9-9b0d-68d468d7a036" (UID: "7409f2a2-14dd-4bd9-9b0d-68d468d7a036"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.006116 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014621 4985 generic.go:334] "Generic (PLEG): container finished" podID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerID="5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0" exitCode=0
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerDied","Data":"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0"}
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014746 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lfg8"
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014776 4985 scope.go:117] "RemoveContainer" containerID="5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0"
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014760 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerDied","Data":"47e61066b587de4bdb4d330875bee6c011e7fa07480ad9c2d8f5468abae1466f"}
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.061304 4985 scope.go:117] "RemoveContainer" containerID="424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070"
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.093859 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"]
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.111172 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"]
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.120694 4985 scope.go:117] "RemoveContainer" containerID="e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc"
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.169555 4985 scope.go:117] "RemoveContainer" containerID="5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0"
Jan 28 19:51:10 crc kubenswrapper[4985]: E0128 19:51:10.170126 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0\": container with ID starting with 5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0 not found: ID does not exist" containerID="5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0"
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.170171 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0"} err="failed to get container status \"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0\": rpc error: code = NotFound desc = could not find container \"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0\": container with ID starting with 5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0 not found: ID does not exist"
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.170197 4985 scope.go:117] "RemoveContainer" containerID="424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070"
Jan 28 19:51:10 crc kubenswrapper[4985]: E0128 19:51:10.170735 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070\": container with ID starting with 424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070 not found: ID does not exist" containerID="424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070"
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.170802 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070"} err="failed to get container status \"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070\": rpc error: code = NotFound desc = could not find container \"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070\": container with ID starting with 424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070 not found: ID does not exist"
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.170840 4985 scope.go:117] "RemoveContainer" containerID="e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc"
Jan 28 19:51:10 crc kubenswrapper[4985]: E0128 19:51:10.171721 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc\": container with ID starting with e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc not found: ID does not exist" containerID="e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc"
Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.171758 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc"} err="failed to get container status \"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc\": rpc error: code = NotFound desc = could not find container \"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc\": container with ID starting with e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc not found: ID does not exist"
Jan 28 19:51:11 crc kubenswrapper[4985]: I0128 19:51:11.288536 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" path="/var/lib/kubelet/pods/7409f2a2-14dd-4bd9-9b0d-68d468d7a036/volumes"
Jan 28 19:51:13 crc kubenswrapper[4985]: I0128 19:51:13.263920 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:51:13 crc kubenswrapper[4985]: E0128 19:51:13.264228 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:51:28 crc kubenswrapper[4985]: I0128 19:51:28.264190 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
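The paired "RemoveContainer" / "Error syncing pod ... CrashLoopBackOff" entries above and below record kubelet's restart back-off for machine-config-daemon: each sync attempt is refused until the back-off window expires, and the window grows until it reaches the cap quoted in the message itself ("back-off 5m0s"). A minimal Go sketch of that doubling-with-cap pattern follows; the 10-second starting value is kubelet's documented default and is assumed here, while the 5-minute cap is taken directly from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed starting delay (kubelet's documented default is 10s);
        // the cap matches the "back-off 5m0s" text in the log itself.
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("restart attempt %d: wait %v before StartContainer\n", attempt, delay)
            delay *= 2 // the back-off doubles after each failed restart...
            if delay > maxDelay {
                delay = maxDelay // ...and pins at the cap, hence the repeated 5m0s messages
            }
        }
    }

Once the cap is reached, the log settles into the steady roughly-once-per-back-off-window rhythm visible below.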
Jan 28 19:51:28 crc kubenswrapper[4985]: E0128 19:51:28.265117 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:51:40 crc kubenswrapper[4985]: I0128 19:51:40.264849 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:51:40 crc kubenswrapper[4985]: E0128 19:51:40.266019 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:51:52 crc kubenswrapper[4985]: I0128 19:51:52.264491 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:51:52 crc kubenswrapper[4985]: E0128 19:51:52.265573 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:52:05 crc kubenswrapper[4985]: I0128 19:52:05.264567 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:52:05 crc kubenswrapper[4985]: E0128 19:52:05.265686 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.434728 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"]
Jan 28 19:52:10 crc kubenswrapper[4985]: E0128 19:52:10.435572 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.435582 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server"
Jan 28 19:52:10 crc kubenswrapper[4985]: E0128 19:52:10.435617 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="extract-utilities"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.435623 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="extract-utilities"
Jan 28 19:52:10 crc kubenswrapper[4985]: E0128 19:52:10.435640 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="extract-content"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.435646 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="extract-content"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.435851 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.437408 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.439050 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"]
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.591561 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.591640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.591737 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.694507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.694975 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.695160 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.695706 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.695792 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.715847 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.781313 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8696"
Jan 28 19:52:11 crc kubenswrapper[4985]: I0128 19:52:11.357141 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"]
Jan 28 19:52:11 crc kubenswrapper[4985]: I0128 19:52:11.985005 4985 generic.go:334] "Generic (PLEG): container finished" podID="ad73e021-615d-4c78-926e-af3b8812da9c" containerID="b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a" exitCode=0
Jan 28 19:52:11 crc kubenswrapper[4985]: I0128 19:52:11.985071 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerDied","Data":"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a"}
Jan 28 19:52:11 crc kubenswrapper[4985]: I0128 19:52:11.985375 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerStarted","Data":"513bc7b6059f1e0b7811ca2a6e846ab89a1bd700812e1eb8437574fc3b92572e"}
Jan 28 19:52:14 crc kubenswrapper[4985]: I0128 19:52:14.023537 4985 generic.go:334] "Generic (PLEG): container finished" podID="ad73e021-615d-4c78-926e-af3b8812da9c" containerID="763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe" exitCode=0
Jan 28 19:52:14 crc kubenswrapper[4985]: I0128 19:52:14.023605 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerDied","Data":"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe"}
Jan 28 19:52:15 crc kubenswrapper[4985]: I0128 19:52:15.038401 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerStarted","Data":"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace"}
Jan 28 19:52:15 crc kubenswrapper[4985]: I0128 19:52:15.073144 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8696" podStartSLOduration=2.557558266 podStartE2EDuration="5.073118833s" podCreationTimestamp="2026-01-28 19:52:10 +0000 UTC" firstStartedPulling="2026-01-28 19:52:11.987386144 +0000 UTC m=+5942.813948975"
lastFinishedPulling="2026-01-28 19:52:14.502946731 +0000 UTC m=+5945.329509542" observedRunningTime="2026-01-28 19:52:15.062279896 +0000 UTC m=+5945.888842757" watchObservedRunningTime="2026-01-28 19:52:15.073118833 +0000 UTC m=+5945.899681684" Jan 28 19:52:20 crc kubenswrapper[4985]: I0128 19:52:20.264145 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:52:20 crc kubenswrapper[4985]: E0128 19:52:20.265010 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:20 crc kubenswrapper[4985]: I0128 19:52:20.782510 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:20 crc kubenswrapper[4985]: I0128 19:52:20.782602 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:20 crc kubenswrapper[4985]: I0128 19:52:20.860132 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:21 crc kubenswrapper[4985]: I0128 19:52:21.164398 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:21 crc kubenswrapper[4985]: I0128 19:52:21.228986 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"] Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.128713 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q8696" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="registry-server" containerID="cri-o://b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" gracePeriod=2 Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.679183 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.851140 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") pod \"ad73e021-615d-4c78-926e-af3b8812da9c\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.851535 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") pod \"ad73e021-615d-4c78-926e-af3b8812da9c\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.851588 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") pod \"ad73e021-615d-4c78-926e-af3b8812da9c\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.852599 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities" (OuterVolumeSpecName: "utilities") pod "ad73e021-615d-4c78-926e-af3b8812da9c" (UID: "ad73e021-615d-4c78-926e-af3b8812da9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.860496 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n" (OuterVolumeSpecName: "kube-api-access-m8d7n") pod "ad73e021-615d-4c78-926e-af3b8812da9c" (UID: "ad73e021-615d-4c78-926e-af3b8812da9c"). InnerVolumeSpecName "kube-api-access-m8d7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.875751 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad73e021-615d-4c78-926e-af3b8812da9c" (UID: "ad73e021-615d-4c78-926e-af3b8812da9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.955003 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.955043 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.955061 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150694 4985 generic.go:334] "Generic (PLEG): container finished" podID="ad73e021-615d-4c78-926e-af3b8812da9c" containerID="b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" exitCode=0 Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150736 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerDied","Data":"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace"} Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150764 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerDied","Data":"513bc7b6059f1e0b7811ca2a6e846ab89a1bd700812e1eb8437574fc3b92572e"} Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150783 4985 scope.go:117] "RemoveContainer" containerID="b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150797 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.183437 4985 scope.go:117] "RemoveContainer" containerID="763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.233129 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"] Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.238289 4985 scope.go:117] "RemoveContainer" containerID="b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.261863 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"] Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.325796 4985 scope.go:117] "RemoveContainer" containerID="b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" Jan 28 19:52:24 crc kubenswrapper[4985]: E0128 19:52:24.326212 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace\": container with ID starting with b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace not found: ID does not exist" containerID="b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.326243 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace"} err="failed to get container status \"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace\": rpc error: code = NotFound desc = could not find container \"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace\": container with ID starting with b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace not found: ID does not exist" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.326274 4985 scope.go:117] "RemoveContainer" containerID="763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe" Jan 28 19:52:24 crc kubenswrapper[4985]: E0128 19:52:24.326599 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe\": container with ID starting with 763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe not found: ID does not exist" containerID="763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.326628 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe"} err="failed to get container status \"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe\": rpc error: code = NotFound desc = could not find container \"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe\": container with ID starting with 763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe not found: ID does not exist" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.326648 4985 scope.go:117] "RemoveContainer" containerID="b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a" Jan 28 19:52:24 crc kubenswrapper[4985]: E0128 19:52:24.327113 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a\": container with ID starting with b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a not found: ID does not exist" containerID="b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.327170 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a"} err="failed to get container status \"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a\": rpc error: code = NotFound desc = could not find container \"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a\": container with ID starting with b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a not found: ID does not exist" Jan 28 19:52:25 crc kubenswrapper[4985]: I0128 19:52:25.289786 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" path="/var/lib/kubelet/pods/ad73e021-615d-4c78-926e-af3b8812da9c/volumes" Jan 28 19:52:32 crc kubenswrapper[4985]: I0128 19:52:32.263773 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:52:32 crc kubenswrapper[4985]: E0128 19:52:32.265127 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:45 crc kubenswrapper[4985]: I0128 19:52:45.264513 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:52:45 crc kubenswrapper[4985]: E0128 19:52:45.265335 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:56 crc kubenswrapper[4985]: I0128 19:52:56.265858 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:52:56 crc kubenswrapper[4985]: E0128 19:52:56.267181 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:59 crc kubenswrapper[4985]: E0128 19:52:59.610793 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:43584->38.102.83.195:43365: write tcp 38.102.83.195:43584->38.102.83.195:43365: write: broken pipe Jan 28 19:53:07 crc kubenswrapper[4985]: I0128 19:53:07.264473 4985 scope.go:117] 
"RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:53:07 crc kubenswrapper[4985]: E0128 19:53:07.265237 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:53:21 crc kubenswrapper[4985]: I0128 19:53:21.264328 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:53:21 crc kubenswrapper[4985]: I0128 19:53:21.946419 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182"} Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.129461 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:41 crc kubenswrapper[4985]: E0128 19:53:41.133885 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="registry-server" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.133915 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="registry-server" Jan 28 19:53:41 crc kubenswrapper[4985]: E0128 19:53:41.133969 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="extract-utilities" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.133983 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="extract-utilities" Jan 28 19:53:41 crc kubenswrapper[4985]: E0128 19:53:41.134010 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="extract-content" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.134024 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="extract-content" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.134488 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="registry-server" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.137846 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.143781 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.176631 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.176722 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.176786 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.279305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.279381 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.279437 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.279938 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.280122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.307231 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.467173 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:42 crc kubenswrapper[4985]: I0128 19:53:42.011422 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:42 crc kubenswrapper[4985]: I0128 19:53:42.246064 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerStarted","Data":"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0"} Jan 28 19:53:42 crc kubenswrapper[4985]: I0128 19:53:42.246131 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerStarted","Data":"753d0099d3c236eea1dc82804e44c58a26c20aeba82d466d277586f3d9937bb8"} Jan 28 19:53:43 crc kubenswrapper[4985]: I0128 19:53:43.262666 4985 generic.go:334] "Generic (PLEG): container finished" podID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerID="cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0" exitCode=0 Jan 28 19:53:43 crc kubenswrapper[4985]: I0128 19:53:43.262747 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerDied","Data":"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0"} Jan 28 19:53:44 crc kubenswrapper[4985]: I0128 19:53:44.278790 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerStarted","Data":"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd"} Jan 28 19:53:46 crc kubenswrapper[4985]: I0128 19:53:46.297830 4985 generic.go:334] "Generic (PLEG): container finished" podID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerID="40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd" exitCode=0 Jan 28 19:53:46 crc kubenswrapper[4985]: I0128 19:53:46.297948 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerDied","Data":"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd"} Jan 28 19:53:47 crc kubenswrapper[4985]: I0128 19:53:47.308526 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerStarted","Data":"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be"} Jan 28 19:53:48 crc kubenswrapper[4985]: I0128 19:53:48.349617 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5ftj6" podStartSLOduration=3.928772177 podStartE2EDuration="7.349597424s" podCreationTimestamp="2026-01-28 19:53:41 +0000 UTC" firstStartedPulling="2026-01-28 19:53:43.266638893 +0000 UTC m=+6034.093201734" lastFinishedPulling="2026-01-28 
19:53:46.68746413 +0000 UTC m=+6037.514026981" observedRunningTime="2026-01-28 19:53:48.339399065 +0000 UTC m=+6039.165961886" watchObservedRunningTime="2026-01-28 19:53:48.349597424 +0000 UTC m=+6039.176160245" Jan 28 19:53:51 crc kubenswrapper[4985]: I0128 19:53:51.467859 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:51 crc kubenswrapper[4985]: I0128 19:53:51.468361 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:51 crc kubenswrapper[4985]: I0128 19:53:51.554393 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:52 crc kubenswrapper[4985]: I0128 19:53:52.444875 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:52 crc kubenswrapper[4985]: I0128 19:53:52.500735 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:54 crc kubenswrapper[4985]: I0128 19:53:54.405052 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5ftj6" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="registry-server" containerID="cri-o://adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" gracePeriod=2 Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.088476 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.108074 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") pod \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.108348 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") pod \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.108405 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") pod \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.109459 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities" (OuterVolumeSpecName: "utilities") pod "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" (UID: "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.116544 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw" (OuterVolumeSpecName: "kube-api-access-p6zbw") pod "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" (UID: "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0"). InnerVolumeSpecName "kube-api-access-p6zbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.210334 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.210373 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") on node \"crc\" DevicePath \"\"" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.420745 4985 generic.go:334] "Generic (PLEG): container finished" podID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerID="adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" exitCode=0 Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.420831 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerDied","Data":"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be"} Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.421969 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerDied","Data":"753d0099d3c236eea1dc82804e44c58a26c20aeba82d466d277586f3d9937bb8"} Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.420873 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.422003 4985 scope.go:117] "RemoveContainer" containerID="adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.463892 4985 scope.go:117] "RemoveContainer" containerID="40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.500174 4985 scope.go:117] "RemoveContainer" containerID="cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.538385 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" (UID: "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.579189 4985 scope.go:117] "RemoveContainer" containerID="adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" Jan 28 19:53:55 crc kubenswrapper[4985]: E0128 19:53:55.580011 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be\": container with ID starting with adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be not found: ID does not exist" containerID="adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580070 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be"} err="failed to get container status \"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be\": rpc error: code = NotFound desc = could not find container \"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be\": container with ID starting with adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be not found: ID does not exist" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580102 4985 scope.go:117] "RemoveContainer" containerID="40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd" Jan 28 19:53:55 crc kubenswrapper[4985]: E0128 19:53:55.580413 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd\": container with ID starting with 40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd not found: ID does not exist" containerID="40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580445 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd"} err="failed to get container status \"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd\": rpc error: code = NotFound desc = could not find container \"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd\": container with ID starting with 40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd not found: ID does not exist" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580465 4985 scope.go:117] "RemoveContainer" containerID="cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0" Jan 28 19:53:55 crc kubenswrapper[4985]: E0128 19:53:55.580720 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0\": container with ID starting with cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0 not found: ID does not exist" containerID="cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580749 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0"} err="failed to get container status \"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0\": rpc error: code = NotFound desc = could not 
find container \"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0\": container with ID starting with cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0 not found: ID does not exist" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.621186 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.775753 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.786805 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:57 crc kubenswrapper[4985]: I0128 19:53:57.276355 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" path="/var/lib/kubelet/pods/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0/volumes" Jan 28 19:55:41 crc kubenswrapper[4985]: I0128 19:55:41.186614 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:55:41 crc kubenswrapper[4985]: I0128 19:55:41.187210 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:56:11 crc kubenswrapper[4985]: I0128 19:56:11.186031 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:56:11 crc kubenswrapper[4985]: I0128 19:56:11.186573 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.187206 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.189094 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.189317 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:56:41 crc 
kubenswrapper[4985]: I0128 19:56:41.190550 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.190781 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182" gracePeriod=600 Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.604967 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182" exitCode=0 Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.605007 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182"} Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.605365 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"} Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.605395 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:58:41 crc kubenswrapper[4985]: I0128 19:58:41.186846 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:58:41 crc kubenswrapper[4985]: I0128 19:58:41.187579 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:59:11 crc kubenswrapper[4985]: I0128 19:59:11.186045 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:59:11 crc kubenswrapper[4985]: I0128 19:59:11.186747 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.186033 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.186602 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.186671 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.187722 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.187787 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" gracePeriod=600 Jan 28 19:59:41 crc kubenswrapper[4985]: E0128 19:59:41.357467 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:59:42 crc kubenswrapper[4985]: I0128 19:59:42.049415 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" exitCode=0 Jan 28 19:59:42 crc kubenswrapper[4985]: I0128 19:59:42.049480 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"} Jan 28 19:59:42 crc kubenswrapper[4985]: I0128 19:59:42.049809 4985 scope.go:117] "RemoveContainer" containerID="69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182" Jan 28 19:59:42 crc kubenswrapper[4985]: I0128 19:59:42.050595 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 19:59:42 crc kubenswrapper[4985]: E0128 19:59:42.050942 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" 
podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:59:54 crc kubenswrapper[4985]: I0128 19:59:54.264834 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 19:59:54 crc kubenswrapper[4985]: E0128 19:59:54.265564 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.180637 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf"] Jan 28 20:00:00 crc kubenswrapper[4985]: E0128 20:00:00.182894 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="extract-content" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.182925 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="extract-content" Jan 28 20:00:00 crc kubenswrapper[4985]: E0128 20:00:00.183124 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="extract-utilities" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.183136 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="extract-utilities" Jan 28 20:00:00 crc kubenswrapper[4985]: E0128 20:00:00.183167 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="registry-server" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.183176 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="registry-server" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.183947 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="registry-server" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.184956 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.187729 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.187799 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.194159 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf"] Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.258119 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.258186 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.258209 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.362705 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.363113 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.363149 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.365114 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") pod 
\"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.379489 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.379484 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.520355 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:01 crc kubenswrapper[4985]: I0128 20:00:01.027267 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf"] Jan 28 20:00:01 crc kubenswrapper[4985]: I0128 20:00:01.317097 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" event={"ID":"a139d0d8-6583-4fe1-b693-0a3162f84c9a","Type":"ContainerStarted","Data":"91afb9bc81746a22360329a0c2e9c3578ef8fbe8cf8f47db2261a77cef8f47e7"} Jan 28 20:00:02 crc kubenswrapper[4985]: I0128 20:00:02.346513 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" event={"ID":"a139d0d8-6583-4fe1-b693-0a3162f84c9a","Type":"ContainerStarted","Data":"a6a9bf023f54cce16eb2987d2f250a0bda2a4d180506c19b25054195daba1f4f"} Jan 28 20:00:02 crc kubenswrapper[4985]: I0128 20:00:02.400826 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" podStartSLOduration=2.400796175 podStartE2EDuration="2.400796175s" podCreationTimestamp="2026-01-28 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 20:00:02.392576972 +0000 UTC m=+6413.219139823" watchObservedRunningTime="2026-01-28 20:00:02.400796175 +0000 UTC m=+6413.227359036" Jan 28 20:00:04 crc kubenswrapper[4985]: I0128 20:00:04.373135 4985 generic.go:334] "Generic (PLEG): container finished" podID="a139d0d8-6583-4fe1-b693-0a3162f84c9a" containerID="a6a9bf023f54cce16eb2987d2f250a0bda2a4d180506c19b25054195daba1f4f" exitCode=0 Jan 28 20:00:04 crc kubenswrapper[4985]: I0128 20:00:04.373347 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" event={"ID":"a139d0d8-6583-4fe1-b693-0a3162f84c9a","Type":"ContainerDied","Data":"a6a9bf023f54cce16eb2987d2f250a0bda2a4d180506c19b25054195daba1f4f"} Jan 28 20:00:05 crc kubenswrapper[4985]: I0128 20:00:05.875769 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.015489 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") pod \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.015689 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") pod \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.016447 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") pod \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.021216 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume" (OuterVolumeSpecName: "config-volume") pod "a139d0d8-6583-4fe1-b693-0a3162f84c9a" (UID: "a139d0d8-6583-4fe1-b693-0a3162f84c9a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.028215 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a139d0d8-6583-4fe1-b693-0a3162f84c9a" (UID: "a139d0d8-6583-4fe1-b693-0a3162f84c9a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.029879 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78" (OuterVolumeSpecName: "kube-api-access-s7d78") pod "a139d0d8-6583-4fe1-b693-0a3162f84c9a" (UID: "a139d0d8-6583-4fe1-b693-0a3162f84c9a"). InnerVolumeSpecName "kube-api-access-s7d78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.121303 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.121347 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.121363 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.399387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" event={"ID":"a139d0d8-6583-4fe1-b693-0a3162f84c9a","Type":"ContainerDied","Data":"91afb9bc81746a22360329a0c2e9c3578ef8fbe8cf8f47db2261a77cef8f47e7"} Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.399431 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91afb9bc81746a22360329a0c2e9c3578ef8fbe8cf8f47db2261a77cef8f47e7" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.399878 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.463120 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.477547 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 20:00:07 crc kubenswrapper[4985]: I0128 20:00:07.280625 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7f7054-2ff2-4045-aa35-4345b449dc70" path="/var/lib/kubelet/pods/dc7f7054-2ff2-4045-aa35-4345b449dc70/volumes" Jan 28 20:00:09 crc kubenswrapper[4985]: I0128 20:00:09.264551 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:00:09 crc kubenswrapper[4985]: E0128 20:00:09.265099 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.553793 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:14 crc kubenswrapper[4985]: E0128 20:00:14.554971 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a139d0d8-6583-4fe1-b693-0a3162f84c9a" containerName="collect-profiles" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.554988 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a139d0d8-6583-4fe1-b693-0a3162f84c9a" containerName="collect-profiles" Jan 28 20:00:14 crc 
kubenswrapper[4985]: I0128 20:00:14.555336 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a139d0d8-6583-4fe1-b693-0a3162f84c9a" containerName="collect-profiles" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.557557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.557656 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.684168 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.684752 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.684805 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.788630 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.788703 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.788899 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.789739 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.790018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.815518 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.914693 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:15 crc kubenswrapper[4985]: I0128 20:00:15.453537 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:15 crc kubenswrapper[4985]: I0128 20:00:15.521241 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerStarted","Data":"692e290ffd1bb0bf80c942964ddc2e19c3d4374e1f1bb6ba46b12a95e1c75bc8"} Jan 28 20:00:16 crc kubenswrapper[4985]: I0128 20:00:16.534401 4985 generic.go:334] "Generic (PLEG): container finished" podID="3bc390cd-8043-4c98-b7ce-c12170795362" containerID="fbcb4e57c66f42d19bfb4fb5f2f9a72f9458e83a1b7c389068e41fb01f3d54eb" exitCode=0 Jan 28 20:00:16 crc kubenswrapper[4985]: I0128 20:00:16.534452 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerDied","Data":"fbcb4e57c66f42d19bfb4fb5f2f9a72f9458e83a1b7c389068e41fb01f3d54eb"} Jan 28 20:00:16 crc kubenswrapper[4985]: I0128 20:00:16.537477 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 20:00:18 crc kubenswrapper[4985]: I0128 20:00:18.557878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerStarted","Data":"274788a6ff58425f1ec3dc66cad627f3b9911ef7a411c12b374dd4064131c4fe"} Jan 28 20:00:20 crc kubenswrapper[4985]: I0128 20:00:20.597216 4985 generic.go:334] "Generic (PLEG): container finished" podID="3bc390cd-8043-4c98-b7ce-c12170795362" containerID="274788a6ff58425f1ec3dc66cad627f3b9911ef7a411c12b374dd4064131c4fe" exitCode=0 Jan 28 20:00:20 crc kubenswrapper[4985]: I0128 20:00:20.597285 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerDied","Data":"274788a6ff58425f1ec3dc66cad627f3b9911ef7a411c12b374dd4064131c4fe"} Jan 28 20:00:21 crc kubenswrapper[4985]: I0128 20:00:21.275722 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:00:21 crc kubenswrapper[4985]: E0128 20:00:21.276474 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:22 crc kubenswrapper[4985]: I0128 20:00:22.621395 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerStarted","Data":"a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e"} Jan 28 20:00:22 crc kubenswrapper[4985]: I0128 20:00:22.644882 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bq9kf" podStartSLOduration=3.857729909 podStartE2EDuration="8.644862698s" podCreationTimestamp="2026-01-28 20:00:14 +0000 UTC" firstStartedPulling="2026-01-28 20:00:16.537268565 +0000 UTC m=+6427.363831386" lastFinishedPulling="2026-01-28 20:00:21.324401344 +0000 UTC m=+6432.150964175" observedRunningTime="2026-01-28 20:00:22.64002121 +0000 UTC m=+6433.466584051" watchObservedRunningTime="2026-01-28 20:00:22.644862698 +0000 UTC m=+6433.471425519" Jan 28 20:00:24 crc kubenswrapper[4985]: I0128 20:00:24.915652 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:24 crc kubenswrapper[4985]: I0128 20:00:24.916047 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:24 crc kubenswrapper[4985]: I0128 20:00:24.969529 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:34 crc kubenswrapper[4985]: I0128 20:00:34.973384 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:35 crc kubenswrapper[4985]: I0128 20:00:35.028162 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:35 crc kubenswrapper[4985]: I0128 20:00:35.792151 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bq9kf" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="registry-server" containerID="cri-o://a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e" gracePeriod=2 Jan 28 20:00:36 crc kubenswrapper[4985]: I0128 20:00:36.265005 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:00:36 crc kubenswrapper[4985]: E0128 20:00:36.265745 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:36 crc kubenswrapper[4985]: I0128 20:00:36.807669 4985 generic.go:334] "Generic (PLEG): container finished" podID="3bc390cd-8043-4c98-b7ce-c12170795362" containerID="a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e" exitCode=0 Jan 28 20:00:36 crc kubenswrapper[4985]: I0128 20:00:36.807705 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" 
event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerDied","Data":"a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e"} Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.024287 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.194364 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") pod \"3bc390cd-8043-4c98-b7ce-c12170795362\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.194503 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") pod \"3bc390cd-8043-4c98-b7ce-c12170795362\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.194766 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") pod \"3bc390cd-8043-4c98-b7ce-c12170795362\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.196110 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities" (OuterVolumeSpecName: "utilities") pod "3bc390cd-8043-4c98-b7ce-c12170795362" (UID: "3bc390cd-8043-4c98-b7ce-c12170795362"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.209674 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x" (OuterVolumeSpecName: "kube-api-access-gxm4x") pod "3bc390cd-8043-4c98-b7ce-c12170795362" (UID: "3bc390cd-8043-4c98-b7ce-c12170795362"). InnerVolumeSpecName "kube-api-access-gxm4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.259223 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bc390cd-8043-4c98-b7ce-c12170795362" (UID: "3bc390cd-8043-4c98-b7ce-c12170795362"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.297542 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.297573 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.297582 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.822820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerDied","Data":"692e290ffd1bb0bf80c942964ddc2e19c3d4374e1f1bb6ba46b12a95e1c75bc8"} Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.822887 4985 scope.go:117] "RemoveContainer" containerID="a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.822938 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.855061 4985 scope.go:117] "RemoveContainer" containerID="274788a6ff58425f1ec3dc66cad627f3b9911ef7a411c12b374dd4064131c4fe" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.857747 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.870321 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.883626 4985 scope.go:117] "RemoveContainer" containerID="fbcb4e57c66f42d19bfb4fb5f2f9a72f9458e83a1b7c389068e41fb01f3d54eb" Jan 28 20:00:39 crc kubenswrapper[4985]: I0128 20:00:39.280592 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" path="/var/lib/kubelet/pods/3bc390cd-8043-4c98-b7ce-c12170795362/volumes" Jan 28 20:00:43 crc kubenswrapper[4985]: I0128 20:00:43.872071 4985 scope.go:117] "RemoveContainer" containerID="338f8d06b8e77092f3ed49ded314fa263d3bc00689eede0c01a39e28fc35ddd0" Jan 28 20:00:49 crc kubenswrapper[4985]: I0128 20:00:49.264361 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:00:49 crc kubenswrapper[4985]: E0128 20:00:49.265295 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.529130 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 20:00:51 crc 
kubenswrapper[4985]: E0128 20:00:51.530455 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="extract-utilities" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.530483 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="extract-utilities" Jan 28 20:00:51 crc kubenswrapper[4985]: E0128 20:00:51.530522 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="extract-content" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.530533 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="extract-content" Jan 28 20:00:51 crc kubenswrapper[4985]: E0128 20:00:51.530557 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="registry-server" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.530568 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="registry-server" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.530944 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="registry-server" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.532238 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.535926 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hb5cc" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.536563 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.537719 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.540380 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.542639 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.578412 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.578467 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.578583 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681158 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681213 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681349 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681376 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681429 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681504 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: 
\"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.682114 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.683845 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.688828 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784187 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784261 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784290 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784313 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784464 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784934 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784961 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.789035 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.789775 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.790672 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.811734 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.824682 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.874701 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 20:00:52 crc kubenswrapper[4985]: W0128 20:00:52.392747 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda808dc72_a951_4f07_a612_2fde39a49a30.slice/crio-8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840 WatchSource:0}: Error finding container 8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840: Status 404 returned error can't find the container with id 8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840 Jan 28 20:00:52 crc kubenswrapper[4985]: I0128 20:00:52.392904 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 20:00:53 crc kubenswrapper[4985]: I0128 20:00:53.020749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a808dc72-a951-4f07-a612-2fde39a49a30","Type":"ContainerStarted","Data":"8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840"} Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.368068 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.372481 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.385374 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.571924 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.572345 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.572457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.674344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.674404 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") pod \"redhat-operators-spssk\" (UID: 
\"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.674510 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.675046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.675067 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.708155 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:55 crc kubenswrapper[4985]: I0128 20:00:55.001281 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:55 crc kubenswrapper[4985]: I0128 20:00:55.644660 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:00:56 crc kubenswrapper[4985]: I0128 20:00:56.066931 4985 generic.go:334] "Generic (PLEG): container finished" podID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerID="3c2283779a914e25036c37ef2827bd05492395f0fd0244baa58d85cf05f996a1" exitCode=0 Jan 28 20:00:56 crc kubenswrapper[4985]: I0128 20:00:56.067113 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"3c2283779a914e25036c37ef2827bd05492395f0fd0244baa58d85cf05f996a1"} Jan 28 20:00:56 crc kubenswrapper[4985]: I0128 20:00:56.067208 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerStarted","Data":"28f0a59519c9b60c4ce3a2ff63447bff887c38b436a2ce97a8fb8d2c39a8e834"} Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.245854 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29493841-rkhj6"] Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.249076 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.264277 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493841-rkhj6"] Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.422138 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.422434 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.422462 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.423383 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.525753 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.525909 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.525999 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.526025 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.532595 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.540640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.541567 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.543480 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.621401 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:04 crc kubenswrapper[4985]: I0128 20:01:04.264946 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:04 crc kubenswrapper[4985]: E0128 20:01:04.266021 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:01:08 crc kubenswrapper[4985]: W0128 20:01:08.503495 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc901d430_df5f_4afa_8a40_9ed18d2ad552.slice/crio-f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d WatchSource:0}: Error finding container f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d: Status 404 returned error can't find the container with id f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d Jan 28 20:01:08 crc kubenswrapper[4985]: I0128 20:01:08.508422 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493841-rkhj6"] Jan 28 20:01:09 crc kubenswrapper[4985]: I0128 20:01:09.233682 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493841-rkhj6" event={"ID":"c901d430-df5f-4afa-8a40-9ed18d2ad552","Type":"ContainerStarted","Data":"f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d"} Jan 28 20:01:10 crc kubenswrapper[4985]: I0128 20:01:10.254228 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493841-rkhj6" 
event={"ID":"c901d430-df5f-4afa-8a40-9ed18d2ad552","Type":"ContainerStarted","Data":"add1992ce6f5ead56094c5643c8729c313a9a2d5dd2d22b565d4688777afae96"} Jan 28 20:01:10 crc kubenswrapper[4985]: I0128 20:01:10.283542 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29493841-rkhj6" podStartSLOduration=10.283515959 podStartE2EDuration="10.283515959s" podCreationTimestamp="2026-01-28 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 20:01:10.279278818 +0000 UTC m=+6481.105841669" watchObservedRunningTime="2026-01-28 20:01:10.283515959 +0000 UTC m=+6481.110078810" Jan 28 20:01:16 crc kubenswrapper[4985]: I0128 20:01:16.263828 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:16 crc kubenswrapper[4985]: E0128 20:01:16.264634 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:01:31 crc kubenswrapper[4985]: I0128 20:01:31.277588 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:31 crc kubenswrapper[4985]: E0128 20:01:31.278440 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:01:43 crc kubenswrapper[4985]: I0128 20:01:43.688972 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.532560472s: [/var/lib/containers/storage/overlay/1c5d844420c9e6694b90098e23024dca450ee6c45edf1bee0c323f8999be7645/diff /var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/galera/0.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:01:44 crc kubenswrapper[4985]: I0128 20:01:44.264883 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:44 crc kubenswrapper[4985]: E0128 20:01:44.266885 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:01:59 crc kubenswrapper[4985]: I0128 20:01:59.264848 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:59 crc kubenswrapper[4985]: E0128 20:01:59.265865 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:02 crc kubenswrapper[4985]: I0128 20:02:02.843773 4985 trace.go:236] Trace[2066331907]: "Calculate volume metrics of ca-trust-extracted for pod openshift-image-registry/image-registry-66df7c8f76-77p8r" (28-Jan-2026 20:02:01.238) (total time: 1481ms): Jan 28 20:02:02 crc kubenswrapper[4985]: Trace[2066331907]: [1.481882264s] [1.481882264s] END Jan 28 20:02:04 crc kubenswrapper[4985]: I0128 20:02:04.418822 4985 trace.go:236] Trace[22073964]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/certified-operators-mclkd" (28-Jan-2026 20:02:02.408) (total time: 2010ms): Jan 28 20:02:04 crc kubenswrapper[4985]: Trace[22073964]: [2.010191273s] [2.010191273s] END Jan 28 20:02:04 crc kubenswrapper[4985]: I0128 20:02:04.501396 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 3.379233153s: [/var/lib/containers/storage/overlay/2b74aa33c03668223a87dd3c1ff4a84a09224e18713c6538d4c947dab78be4d8/diff /var/log/pods/openstack_openstackclient_1d8f391e-0ed3-4969-b61b-5b9d602644fa/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:02:07 crc kubenswrapper[4985]: I0128 20:02:07.031418 4985 generic.go:334] "Generic (PLEG): container finished" podID="c901d430-df5f-4afa-8a40-9ed18d2ad552" containerID="add1992ce6f5ead56094c5643c8729c313a9a2d5dd2d22b565d4688777afae96" exitCode=0 Jan 28 20:02:07 crc kubenswrapper[4985]: I0128 20:02:07.031562 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493841-rkhj6" event={"ID":"c901d430-df5f-4afa-8a40-9ed18d2ad552","Type":"ContainerDied","Data":"add1992ce6f5ead56094c5643c8729c313a9a2d5dd2d22b565d4688777afae96"} Jan 28 20:02:12 crc kubenswrapper[4985]: I0128 20:02:12.264857 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:02:12 crc kubenswrapper[4985]: E0128 20:02:12.266492 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.351294 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.354424 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.391524 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.391583 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.391788 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.493652 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.493802 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.493859 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.494343 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.496774 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.520643 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.545445 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.732686 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.790060 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.863787 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") pod \"c901d430-df5f-4afa-8a40-9ed18d2ad552\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.863866 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") pod \"c901d430-df5f-4afa-8a40-9ed18d2ad552\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.864095 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") pod \"c901d430-df5f-4afa-8a40-9ed18d2ad552\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.864268 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") pod \"c901d430-df5f-4afa-8a40-9ed18d2ad552\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.881868 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg" (OuterVolumeSpecName: "kube-api-access-zfbbg") pod "c901d430-df5f-4afa-8a40-9ed18d2ad552" (UID: "c901d430-df5f-4afa-8a40-9ed18d2ad552"). InnerVolumeSpecName "kube-api-access-zfbbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.882381 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c901d430-df5f-4afa-8a40-9ed18d2ad552" (UID: "c901d430-df5f-4afa-8a40-9ed18d2ad552"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.915969 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c901d430-df5f-4afa-8a40-9ed18d2ad552" (UID: "c901d430-df5f-4afa-8a40-9ed18d2ad552"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.942116 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data" (OuterVolumeSpecName: "config-data") pod "c901d430-df5f-4afa-8a40-9ed18d2ad552" (UID: "c901d430-df5f-4afa-8a40-9ed18d2ad552"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.966981 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.967023 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") on node \"crc\" DevicePath \"\"" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.967035 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.967043 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 20:02:22 crc kubenswrapper[4985]: E0128 20:02:22.997779 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 28 20:02:23 crc kubenswrapper[4985]: E0128 20:02:23.002886 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5tss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a808dc72-a951-4f07-a612-2fde39a49a30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 20:02:23 crc kubenswrapper[4985]: E0128 20:02:23.004913 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="a808dc72-a951-4f07-a612-2fde39a49a30" Jan 28 20:02:23 crc kubenswrapper[4985]: I0128 20:02:23.259108 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493841-rkhj6" event={"ID":"c901d430-df5f-4afa-8a40-9ed18d2ad552","Type":"ContainerDied","Data":"f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d"} Jan 28 20:02:23 crc kubenswrapper[4985]: I0128 20:02:23.259621 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d" Jan 28 20:02:23 crc kubenswrapper[4985]: I0128 20:02:23.259154 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:02:23 crc kubenswrapper[4985]: E0128 20:02:23.271577 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a808dc72-a951-4f07-a612-2fde39a49a30" Jan 28 20:02:23 crc kubenswrapper[4985]: I0128 20:02:23.584342 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:02:23 crc kubenswrapper[4985]: W0128 20:02:23.590556 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode90a8845_3321_45ae_8c9d_524afa36cdd7.slice/crio-7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06 WatchSource:0}: Error finding container 7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06: Status 404 returned error can't find the container with id 7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06 Jan 28 20:02:24 crc kubenswrapper[4985]: I0128 20:02:24.273627 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerStarted","Data":"dda8ac60f550a2e96f02464275f0b11a82d9a3d53d2e2270e9d67c06ea4c3b44"} Jan 28 20:02:24 crc kubenswrapper[4985]: I0128 20:02:24.276171 4985 generic.go:334] "Generic (PLEG): container finished" podID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerID="eaa8b31fd567cbe5402dee337791c77b7d17c2a64b306b5f934b501e7555c359" exitCode=0 Jan 28 20:02:24 crc kubenswrapper[4985]: I0128 20:02:24.276218 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerDied","Data":"eaa8b31fd567cbe5402dee337791c77b7d17c2a64b306b5f934b501e7555c359"} Jan 28 20:02:24 crc kubenswrapper[4985]: I0128 20:02:24.276401 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerStarted","Data":"7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06"} Jan 28 20:02:26 crc kubenswrapper[4985]: I0128 20:02:26.264200 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:02:26 crc kubenswrapper[4985]: E0128 20:02:26.265275 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:30 crc kubenswrapper[4985]: I0128 20:02:30.352141 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerStarted","Data":"6aae3f87a8a75e8de0eb7f2174fb7e1ad791b3b13463186c8a127596ad993426"} Jan 28 20:02:35 crc kubenswrapper[4985]: I0128 20:02:35.701498 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:02:35 crc kubenswrapper[4985]: I0128 20:02:35.701901 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:02:37 crc kubenswrapper[4985]: I0128 20:02:37.004781 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podUID="4fa1b302-aad3-4e6e-9cd2-bba65262c1e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:02:38 crc kubenswrapper[4985]: I0128 20:02:38.263873 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:02:38 crc kubenswrapper[4985]: E0128 20:02:38.264504 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:42 crc kubenswrapper[4985]: I0128 20:02:42.921670 4985 generic.go:334] "Generic (PLEG): container finished" podID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerID="6aae3f87a8a75e8de0eb7f2174fb7e1ad791b3b13463186c8a127596ad993426" exitCode=0 Jan 28 20:02:42 crc kubenswrapper[4985]: I0128 20:02:42.921793 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerDied","Data":"6aae3f87a8a75e8de0eb7f2174fb7e1ad791b3b13463186c8a127596ad993426"} Jan 28 20:02:43 crc kubenswrapper[4985]: I0128 20:02:43.118723 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 20:02:44 crc kubenswrapper[4985]: I0128 20:02:44.944899 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" 
event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerStarted","Data":"5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599"} Jan 28 20:02:44 crc kubenswrapper[4985]: I0128 20:02:44.986645 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h4kmr" podStartSLOduration=12.541055053000001 podStartE2EDuration="31.986608699s" podCreationTimestamp="2026-01-28 20:02:13 +0000 UTC" firstStartedPulling="2026-01-28 20:02:24.279168666 +0000 UTC m=+6555.105731497" lastFinishedPulling="2026-01-28 20:02:43.724722322 +0000 UTC m=+6574.551285143" observedRunningTime="2026-01-28 20:02:44.96268298 +0000 UTC m=+6575.789245801" watchObservedRunningTime="2026-01-28 20:02:44.986608699 +0000 UTC m=+6575.813171520" Jan 28 20:02:46 crc kubenswrapper[4985]: I0128 20:02:46.976820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a808dc72-a951-4f07-a612-2fde39a49a30","Type":"ContainerStarted","Data":"ee163311dba6c1ce70ff2544f9371712e8075bba77bbad31800b493e5588741e"} Jan 28 20:02:47 crc kubenswrapper[4985]: I0128 20:02:47.008126 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=6.288863188 podStartE2EDuration="1m57.008098843s" podCreationTimestamp="2026-01-28 20:00:50 +0000 UTC" firstStartedPulling="2026-01-28 20:00:52.395828508 +0000 UTC m=+6463.222391349" lastFinishedPulling="2026-01-28 20:02:43.115064183 +0000 UTC m=+6573.941627004" observedRunningTime="2026-01-28 20:02:46.992633124 +0000 UTC m=+6577.819195945" watchObservedRunningTime="2026-01-28 20:02:47.008098843 +0000 UTC m=+6577.834661694" Jan 28 20:02:47 crc kubenswrapper[4985]: I0128 20:02:47.989394 4985 generic.go:334] "Generic (PLEG): container finished" podID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerID="dda8ac60f550a2e96f02464275f0b11a82d9a3d53d2e2270e9d67c06ea4c3b44" exitCode=0 Jan 28 20:02:47 crc kubenswrapper[4985]: I0128 20:02:47.989464 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"dda8ac60f550a2e96f02464275f0b11a82d9a3d53d2e2270e9d67c06ea4c3b44"} Jan 28 20:02:50 crc kubenswrapper[4985]: I0128 20:02:50.023145 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerStarted","Data":"2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c"} Jan 28 20:02:50 crc kubenswrapper[4985]: I0128 20:02:50.060079 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-spssk" podStartSLOduration=11.175441512 podStartE2EDuration="1m56.060054744s" podCreationTimestamp="2026-01-28 20:00:54 +0000 UTC" firstStartedPulling="2026-01-28 20:01:04.06274246 +0000 UTC m=+6474.889305301" lastFinishedPulling="2026-01-28 20:02:48.947355702 +0000 UTC m=+6579.773918533" observedRunningTime="2026-01-28 20:02:50.056536044 +0000 UTC m=+6580.883098865" watchObservedRunningTime="2026-01-28 20:02:50.060054744 +0000 UTC m=+6580.886617575" Jan 28 20:02:51 crc kubenswrapper[4985]: I0128 20:02:51.276351 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:02:51 crc kubenswrapper[4985]: E0128 20:02:51.276889 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:53 crc kubenswrapper[4985]: I0128 20:02:53.732816 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:53 crc kubenswrapper[4985]: I0128 20:02:53.733449 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:54 crc kubenswrapper[4985]: I0128 20:02:54.787294 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-h4kmr" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" probeResult="failure" output=< Jan 28 20:02:54 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:02:54 crc kubenswrapper[4985]: > Jan 28 20:02:55 crc kubenswrapper[4985]: I0128 20:02:55.002542 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:02:55 crc kubenswrapper[4985]: I0128 20:02:55.002855 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:02:56 crc kubenswrapper[4985]: I0128 20:02:56.054505 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:02:56 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:02:56 crc kubenswrapper[4985]: > Jan 28 20:03:04 crc kubenswrapper[4985]: I0128 20:03:04.784886 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-h4kmr" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:04 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:04 crc kubenswrapper[4985]: > Jan 28 20:03:06 crc kubenswrapper[4985]: I0128 20:03:06.051789 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:06 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:06 crc kubenswrapper[4985]: > Jan 28 20:03:06 crc kubenswrapper[4985]: I0128 20:03:06.264839 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:03:06 crc kubenswrapper[4985]: E0128 20:03:06.265545 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:03:13 crc kubenswrapper[4985]: I0128 20:03:13.798114 4985 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:03:13 crc kubenswrapper[4985]: I0128 20:03:13.863024 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:03:14 crc kubenswrapper[4985]: I0128 20:03:14.042349 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:03:15 crc kubenswrapper[4985]: I0128 20:03:15.353738 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h4kmr" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" containerID="cri-o://5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599" gracePeriod=2 Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.074474 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:17 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:17 crc kubenswrapper[4985]: > Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.356596 4985 generic.go:334] "Generic (PLEG): container finished" podID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerID="5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599" exitCode=0 Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.356635 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerDied","Data":"5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599"} Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.356660 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerDied","Data":"7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06"} Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.356671 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.402012 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.491639 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") pod \"e90a8845-3321-45ae-8c9d-524afa36cdd7\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.491785 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") pod \"e90a8845-3321-45ae-8c9d-524afa36cdd7\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.491925 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") pod \"e90a8845-3321-45ae-8c9d-524afa36cdd7\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.502040 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities" (OuterVolumeSpecName: "utilities") pod "e90a8845-3321-45ae-8c9d-524afa36cdd7" (UID: "e90a8845-3321-45ae-8c9d-524afa36cdd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.524036 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e90a8845-3321-45ae-8c9d-524afa36cdd7" (UID: "e90a8845-3321-45ae-8c9d-524afa36cdd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.529750 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn" (OuterVolumeSpecName: "kube-api-access-8fzzn") pod "e90a8845-3321-45ae-8c9d-524afa36cdd7" (UID: "e90a8845-3321-45ae-8c9d-524afa36cdd7"). InnerVolumeSpecName "kube-api-access-8fzzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.594840 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.594874 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.594887 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") on node \"crc\" DevicePath \"\"" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:17.366929 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:17.404990 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:17.416843 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:03:19 crc kubenswrapper[4985]: I0128 20:03:19.283158 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" path="/var/lib/kubelet/pods/e90a8845-3321-45ae-8c9d-524afa36cdd7/volumes" Jan 28 20:03:21 crc kubenswrapper[4985]: I0128 20:03:21.276150 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:03:21 crc kubenswrapper[4985]: E0128 20:03:21.276970 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:03:26 crc kubenswrapper[4985]: I0128 20:03:26.057230 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:26 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:26 crc kubenswrapper[4985]: > Jan 28 20:03:34 crc kubenswrapper[4985]: I0128 20:03:34.264338 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:03:34 crc kubenswrapper[4985]: E0128 20:03:34.265078 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:03:36 crc kubenswrapper[4985]: I0128 20:03:36.154876 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:36 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:36 crc kubenswrapper[4985]: > Jan 28 20:03:46 crc kubenswrapper[4985]: I0128 20:03:46.102068 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:46 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:46 crc kubenswrapper[4985]: > Jan 28 20:03:46 crc kubenswrapper[4985]: I0128 20:03:46.273111 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:03:46 crc kubenswrapper[4985]: E0128 20:03:46.274545 4985 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:03:54 crc kubenswrapper[4985]: I0128 20:03:54.900278 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:54 crc kubenswrapper[4985]: I0128 20:03:54.938634 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:55 crc kubenswrapper[4985]: I0128 20:03:55.107071 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:55 crc kubenswrapper[4985]: I0128 20:03:55.107141 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:56 crc kubenswrapper[4985]: I0128 20:03:56.659686 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:56 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:56 crc kubenswrapper[4985]: > Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.329539 4985 patch_prober.go:28] interesting pod/console-74779d9b4-2xxwx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.329879 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-74779d9b4-2xxwx" podUID="6b348b0a-4b9a-4216-adbf-02bcefe1f011" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.546571 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.591475 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.771442 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.105475 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.105535 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.105572 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.105659 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.219554 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.219833 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.219558 4985 patch_prober.go:28] interesting 
pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.219906 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.260519 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.314557 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.350438 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:58 crc kubenswrapper[4985]: E0128 20:03:58.385312 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.833964 4985 trace.go:236] Trace[252571599]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (28-Jan-2026 20:03:56.440) (total time: 2359ms): Jan 28 20:03:58 crc kubenswrapper[4985]: Trace[252571599]: [2.359531652s] [2.359531652s] END Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.014231 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.057428 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.233453 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.233504 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.281692 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.702525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.702540 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.737737 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.737786 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.737815 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.737847 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" 
containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.745161 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-gkjzc" podUID="8f0319d2-9602-42b4-a3fb-c53bf5d3c244" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.849834 4985 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-pcb4d container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.849924 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podUID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.039280 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.039392 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.039308 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.039526 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.107281 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.107359 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" 
podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.107268 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.107478 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.340458 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.340646 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.464396 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.547440 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.547724 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.547762 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.548083 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" 
podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.639755 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.639827 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.640437 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.640457 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.005550 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.005550 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.005861 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.005933 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.167430 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.167628 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.328730 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.328779 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.328837 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.328786 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.365175 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.365233 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.366690 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.366721 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.373021 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.373094 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.373120 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.373177 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.582455 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.582766 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.582619 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:01 crc 
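The two timeout strings that dominate these entries come from Go's net/http client, which kubelet's HTTP prober drives with a per-probe timeout. When that timeout fires before response headers arrive, Go appends "(Client.Timeout exceeded while awaiting headers)" to the underlying error: the "net/http: request canceled while waiting for connection" prefix means no TCP connection had been established yet, while the "context deadline exceeded" prefix means the deadline expired after the request was already underway (the exact prefix also varies with Go version). A minimal sketch, not kubelet code, that reproduces the message shape; 10.255.255.1 is an assumed unroutable placeholder address, not from this log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 1s mirrors a probe's timeoutSeconds; the address is an assumed
	// black hole, so the connection attempt outlives the timeout.
	client := &http.Client{Timeout: 1 * time.Second}
	_, err := client.Get("http://10.255.255.1:8081/healthz")
	// Prints an error ending in "(Client.Timeout exceeded while awaiting
	// headers)", matching the probe failures logged above.
	fmt.Println(err)
}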
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.582885 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697438 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697510 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697587 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697613 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697449 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697652 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697760 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697815 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.815168 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.815258 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.046645 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.046781 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.190579 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.190664 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.731110 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.731437 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.732824 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.732832 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.331272 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.331669 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.494347 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.494425 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.727070 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.727128 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.736082 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.742820 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.742939 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.743834 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.015462 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.016558 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.409492 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.409621 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.409949 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.409974 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.497501 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.497577 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.631111 4985 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-jrf9w container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.631166 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" podUID="645ec0ef-97a6-4e2f-b691-ffcbcab4eed7" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.734869 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.735214 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.735473 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.735659 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.767111 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": context deadline exceeded" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.767186 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": context deadline exceeded"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.768125 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.768149 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.794554 4985 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-dkn9m container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.794612 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" podUID="21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.879871 4985 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-pcd6x container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.879941 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podUID="5c56d4fe-62c7-47ef-9a0f-607d899d19b8" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.963559 4985 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-2755m container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.963612 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podUID="effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.171742 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.171803 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.171875 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.171893 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.172130 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.172179 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.174996 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.175045 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.175188 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.175209 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.547574 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.547637 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.547673 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.547738 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.672715 4985 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-j7z4h container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.672787 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.735518 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.793693 4985 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-dkn9m container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.793793 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" podUID="21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.818531 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.818624 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.861246 4985 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.861327 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ac72f54d-936d-4c98-9f91-918f7a05b5d1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.878791 4985 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-pcd6x container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.878884 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podUID="5c56d4fe-62c7-47ef-9a0f-607d899d19b8" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.935401 4985 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.935470 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="664a7afe-25ae-45f8-81bd-9a9c59c431cd" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.963856 4985 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-2755m container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.963914 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podUID="effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.038347 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.038403 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.038369 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.038717 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.107621 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.107695 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.108319 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8083/live\": context deadline exceeded" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.108336 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/live\": context deadline exceeded"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.716693 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.717084 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.716743 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.717191 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.990649 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.072528 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" podUID="99893bb5-33ef-4159-bf8f-1c79a58e74d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.155495 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.155546 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.155910 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" podUID="99893bb5-33ef-4159-bf8f-1c79a58e74d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.197538 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.197573 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.280470 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
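The openstack-operators pods in this stretch all fail the same way: plain-HTTP GETs against /healthz (liveness) and /readyz (readiness) on port 8081, the conventional health endpoints a controller-runtime manager serves. A framework-free sketch of that convention; the real operators wire this through controller-runtime's healthz handlers, so the handler below is an assumption, not their code:

package main

import (
	"log"
	"net/http"
)

func main() {
	ok := func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	}
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", ok) // liveness: the process is up
	mux.HandleFunc("/readyz", ok)  // readiness: safe to send work
	// :8081 matches the probe port in the entries above.
	log.Fatal(http.ListenAndServe(":8081", mux))
}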
output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.280913 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.436430 4985 patch_prober.go:28] interesting pod/console-74779d9b4-2xxwx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.436481 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" podUID="75e682e9-e5a5-47f1-83cc-c8004ebe224a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.437104 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-74779d9b4-2xxwx" podUID="6b348b0a-4b9a-4216-adbf-02bcefe1f011" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.519498 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.519598 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.525631 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602596 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602724 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602770 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602624 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" podUID="75e682e9-e5a5-47f1-83cc-c8004ebe224a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.767651 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850455 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850535 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850745 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850831 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850869 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded 
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.933655 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.934138 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.934184 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.043410 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.091562 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132431 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132469 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132507 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132515 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132431 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.138717 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.139054 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.138772 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.139298 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.226739 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.235985 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.251820 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" containerID="cri-o://9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48" gracePeriod=30
Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.267472 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
20:04:08.267472 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.309515 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.309625 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.473861 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.494172 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.494443 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555562 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555637 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555663 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555699 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555495 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555881 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.735272 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.735297 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.956292 4985 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-v2hv6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.956378 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" podUID="c731b198-314f-46a9-ad13-a4cc6c7bab94" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.997544 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.192484 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.283593 4985 prober.go:107] "Probe failed" 
probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.703490 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.703535 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.703545 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.737328 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.737400 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.737489 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.737403 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.849870 4985 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-pcb4d container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.850171 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podUID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.038001 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.038068 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.038139 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.038076 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.106584 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.106626 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.106666 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.106690 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" 
probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.299525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.544467 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.544530 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626489 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626507 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626589 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626616 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626534 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.708990 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.709076 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.709344 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.709404 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.711466 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.711511 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.734462 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-gkjzc" podUID="8f0319d2-9602-42b4-a3fb-c53bf5d3c244" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.735411 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.924363 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.924424 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.924580 4985 patch_prober.go:28] 
interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": context deadline exceeded" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.924735 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": context deadline exceeded" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.168516 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.168631 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.328010 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.328072 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.328527 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.329051 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.373338 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.373404 4985 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630420 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630443 4985 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pdwpf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630476 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630505 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podUID="893bf4c0-7b07-4e49-bff4-9ed7d52b3196" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630547 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630617 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630627 4985 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pdwpf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630692 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podUID="893bf4c0-7b07-4e49-bff4-9ed7d52b3196" 
containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712393 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712431 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712454 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712488 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712511 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712558 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712569 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712590 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.735795 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.814845 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.814913 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.869630 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.052424 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.112454 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.196444 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.196541 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.237454 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.237536 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" 
probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.248936 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.286232 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:04:12 crc kubenswrapper[4985]: E0128 20:04:12.288864 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.365455 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.365525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.731224 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.731741 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.736327 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.736390 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.736424 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-5whpv" podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.736572 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5whpv" podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:13 
crc kubenswrapper[4985]: I0128 20:04:13.332890 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.333061 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.373492 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.373497 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.373541 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.501484 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.501692 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.501809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.679458 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.731415 4985 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.733167 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.733707 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.733879 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.733962 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.997006 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.997054 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.236326 4985 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2wxf2 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.236397 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podUID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.348673 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.348750 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.502408 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.502483 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.628069 4985 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-jrf9w container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.628163 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" podUID="645ec0ef-97a6-4e2f-b691-ffcbcab4eed7" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.733527 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" podUID="5eaf2e7f-83ab-438b-8de3-75886a97ada4" containerName="nbdb" probeResult="failure" output="command timed out" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735341 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" podUID="5eaf2e7f-83ab-438b-8de3-75886a97ada4" containerName="sbdb" probeResult="failure" output="command timed out" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735399 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735609 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735801 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735803 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 
20:04:14.735939 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.736236 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.768649 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.768714 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.768775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.771732 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d"} pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" containerMessage="Container metrics-server failed liveness probe, will be restarted"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.774296 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" containerID="cri-o://7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d" gracePeriod=170
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.795128 4985 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-dkn9m container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.795196 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" podUID="21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.879280 4985 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-pcd6x container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.879355 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podUID="5c56d4fe-62c7-47ef-9a0f-607d899d19b8" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.963012 4985 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-2755m container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.963076 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podUID="effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.051701 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.052207 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.051738 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.052331 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.106463 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.106532 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.106620 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.107899 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.107972 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.107909 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.108080 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.364930 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.364995 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.545725 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.545807 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.545742 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.545865 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.715440 4985 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-j7z4h container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.715502 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.715440 4985 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-j7z4h container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.715550 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.753432 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-gkjzc" podUID="8f0319d2-9602-42b4-a3fb-c53bf5d3c244" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.756438 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.43:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.819082 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.819141 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.860970 4985 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.861031 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ac72f54d-936d-4c98-9f91-918f7a05b5d1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.932894 4985 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.933237 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="664a7afe-25ae-45f8-81bd-9a9c59c431cd" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.108519 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.108945 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.372397 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.372487 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.503268 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.716466 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.716499 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.716553 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.716574 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.963469 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podUID="4fa1b302-aad3-4e6e-9cd2-bba65262c1e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.963559 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.005540 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.005598 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.163593 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.164310 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" podUID="4dfb4621-d061-4224-8aee-840726565aa3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.204416 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" podUID="99893bb5-33ef-4159-bf8f-1c79a58e74d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.204426 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.246726 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.252345 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.252443 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.254360 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="hostpath-provisioner" containerStatusID={"Type":"cri-o","ID":"eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409"} pod="hostpath-provisioner/csi-hostpathplugin-5zj27" containerMessage="Container hostpath-provisioner failed liveness probe, will be restarted"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.258607 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" containerID="cri-o://eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409" gracePeriod=30
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.372461 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.372514 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.372537 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.372581 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.414484 4985 patch_prober.go:28] interesting pod/console-74779d9b4-2xxwx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.414568 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-74779d9b4-2xxwx" podUID="6b348b0a-4b9a-4216-adbf-02bcefe1f011" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.414680 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74779d9b4-2xxwx"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.416107 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="841350c5-b9e8-4331-9282-e129f8152153" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.498538 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" podUID="654a2c56-81a7-4b32-ad1d-c4d60b054b47" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.539569 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.539846 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" podUID="75e682e9-e5a5-47f1-83cc-c8004ebe224a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.573078 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded" start-of-body=
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.573186 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.621554 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.621680 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.662528 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.662546 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.662959 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.734319 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" podUID="45d84233-dc44-4b3c-8aaa-f08ab50c0512" containerName="heat-engine" probeResult="failure" output="command timed out"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.734319 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.735890 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" podUID="45d84233-dc44-4b3c-8aaa-f08ab50c0512" containerName="heat-engine" probeResult="failure" output="command timed out"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.736008 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.736057 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.741672 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.741789 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-central-agent" containerID="cri-o://c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b" gracePeriod=30
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.772324 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.772471 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"
Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.935641 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74779d9b4-2xxwx"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.092513 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105400 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105473 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105423 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105534 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105563 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.126864 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c"} pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" containerMessage="Container controller-manager failed liveness probe, will be restarted"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.126936 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" containerID="cri-o://03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c" gracePeriod=30
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.137269 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.137329 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.137373 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.137830 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.137896 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.145032 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea"} pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" containerMessage="Container route-controller-manager failed liveness probe, will be restarted"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.145097 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" containerID="cri-o://4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea" gracePeriod=30
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.185443 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.185590 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.226517 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.334495 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.375490 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.375666 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.416713 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.440689 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.440771 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.440852 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.494527 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.664500 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.705448 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.735046 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-f287q" podUID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerName="ovsdb-server" probeResult="failure" output="command timed out"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.735400 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.735645 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-f287q" podUID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerName="ovs-vswitchd" probeResult="failure" output="command timed out"
Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.737678 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-f287q" podUID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerName="ovsdb-server" probeResult="failure" output="command timed out"
podUID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.751522 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-f287q" podUID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.815420 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.955059 4985 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-v2hv6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.17:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.955445 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" podUID="c731b198-314f-46a9-ad13-a4cc6c7bab94" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.17:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.957280 4985 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-v2hv6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.957346 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" podUID="c731b198-314f-46a9-ad13-a4cc6c7bab94" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.039479 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.039583 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.039594 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.039698 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.233528 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.233683 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.234323 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.235018 4985 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2wxf2 container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.235052 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podUID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.281398 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.361116 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.447240 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.504290 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.703434 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.703459 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.703529 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.703630 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.733157 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae"} pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" containerMessage="Container webhook-server failed liveness probe, will be restarted" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.733240 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" containerID="cri-o://fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae" gracePeriod=2 Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737007 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737067 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737065 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737185 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737269 
4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737373 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.757218 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.757316 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" containerID="cri-o://555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47" gracePeriod=30 Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.849593 4985 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-pcb4d container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.849655 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podUID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.849701 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.861236 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b"} pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.861488 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podUID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerName="authentication-operator" containerID="cri-o://9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b" gracePeriod=30 Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 
20:04:20.037777 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.037852 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.037802 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.037910 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.081548 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.106956 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.107027 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.107039 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.107096 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.382595 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.383115 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.383224 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.385200 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.629556 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.629599 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.629892 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.629937 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.631123 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.631802 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.631841 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:20 crc kubenswrapper[4985]: 
I0128 20:04:20.633113 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1"} pod="metallb-system/frr-k8s-qlsnv" containerMessage="Container controller failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.633164 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8"} pod="metallb-system/frr-k8s-qlsnv" containerMessage="Container frr failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.661376 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" containerID="cri-o://a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1" gracePeriod=2 Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711485 4985 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-77p8r container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711562 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" podUID="69277fd0-66c2-4094-87fd-eaa80e756e75" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711582 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711636 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711674 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711705 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712437 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712464 4985 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712489 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712571 4985 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-77p8r container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712618 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" podUID="69277fd0-66c2-4094-87fd-eaa80e756e75" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.728685 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5"} pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.728753 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" containerID="cri-o://35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5" gracePeriod=10 Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.733487 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089"} pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.733547 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" containerID="cri-o://dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089" gracePeriod=30 Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.733620 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-gkjzc" podUID="8f0319d2-9602-42b4-a3fb-c53bf5d3c244" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.733773 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.740475 4985 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753390 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753475 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753593 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753406 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753806 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753831 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.937201 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.937278 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.937354 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.938282 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.938327 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.938368 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.952149 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d"} pod="openshift-console-operator/console-operator-58897d9998-j6799" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.952230 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" containerID="cri-o://08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d" gracePeriod=30 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.167331 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.167384 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.167436 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.167516 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.171768 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8"} pod="metallb-system/controller-6968d8fdc4-8f79k" containerMessage="Container controller failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.172060 4985 
kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" containerID="cri-o://32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8" gracePeriod=2 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.336924 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.336977 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.337015 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.337214 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.337288 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.337528 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.338467 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" containerMessage="Container olm-operator failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.338509 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" containerID="cri-o://f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70" gracePeriod=30 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.364879 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:21 crc 
kubenswrapper[4985]: I0128 20:04:21.365107 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.373329 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.373387 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.467530 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.620562 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.631219 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.631332 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.631464 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632300 4985 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pdwpf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632366 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podUID="893bf4c0-7b07-4e49-bff4-9ed7d52b3196" 
containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632429 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632448 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632482 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632516 4985 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pdwpf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632536 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podUID="893bf4c0-7b07-4e49-bff4-9ed7d52b3196" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.636932 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.636989 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" containerID="cri-o://6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929" gracePeriod=30 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.714447 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.714523 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.714587 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.724861 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a"} pod="openshift-ingress/router-default-5444994796-qnrsp" containerMessage="Container router failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.724930 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" containerID="cri-o://8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a" gracePeriod=10 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.732479 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756447 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756502 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756558 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756584 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756652 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756601 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756757 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756619 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756797 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756862 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.758329 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.758372 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" containerID="cri-o://d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23" gracePeriod=30 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.798465 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.815515 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.815580 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.815660 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 20:04:21 crc 
kubenswrapper[4985]: I0128 20:04:21.842196 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.842303 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c" gracePeriod=30 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.938776 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.938843 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.047414 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.047499 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-6lq6d" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.047443 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.047658 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6lq6d" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.049620 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089"} pod="metallb-system/speaker-6lq6d" containerMessage="Container speaker failed liveness probe, will be restarted" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.049769 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" containerID="cri-o://7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089" gracePeriod=2 Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.192448 4985 prober.go:107] "Probe failed" probeType="Readiness" 
pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.192460 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.192570 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.208245 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cert-manager-webhook" containerStatusID={"Type":"cri-o","ID":"efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093"} pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" containerMessage="Container cert-manager-webhook failed liveness probe, will be restarted" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.208337 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" containerID="cri-o://efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093" gracePeriod=30 Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.233489 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.289006 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.289161 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.634337 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.634412 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674124 4985 trace.go:236] Trace[1202791139]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (28-Jan-2026 20:04:14.471) (total time: 8182ms): Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[1202791139]: [8.182339713s] [8.182339713s] END Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674144 4985 trace.go:236] Trace[830585391]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (28-Jan-2026 20:04:13.772) (total time: 8882ms): Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[830585391]: [8.882286216s] [8.882286216s] END Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674128 4985 trace.go:236] Trace[511881909]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (28-Jan-2026 20:04:12.851) (total time: 9817ms): Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[511881909]: [9.817961972s] [9.817961972s] END Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674144 4985 trace.go:236] Trace[1249959105]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (28-Jan-2026 20:04:20.743) (total time: 1910ms): Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[1249959105]: [1.910528215s] [1.910528215s] END Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674119 4985 trace.go:236] Trace[1597181343]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (28-Jan-2026 20:04:19.471) (total time: 3183ms): Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[1597181343]: [3.183068984s] [3.183068984s] END Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.731002 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.731542 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.731571 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.731654 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.732512 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.735615 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.735822 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-5whpv" 
podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.737289 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.745173 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.750088 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5whpv" podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.757282 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.757340 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.089439 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.234449 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.359576 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.359852 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 
20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.359985 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.493957 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.493994 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.678742 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.720450 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": read tcp 10.217.0.2:59058->10.217.0.48:8081: read: connection reset by peer" start-of-body= Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.720492 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.720529 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": read tcp 10.217.0.2:59058->10.217.0.48:8081: read: connection reset by peer" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.720635 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.771736 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.771922 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.773948 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" 
podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.775400 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.775519 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.788406 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.880656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1"} Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.882076 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1" exitCode=137 Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.888037 4985 generic.go:334] "Generic (PLEG): container finished" podID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerID="555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47" exitCode=0 Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.888145 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" event={"ID":"81fa949b-5c24-44da-aa29-bd34bcc39d6e","Type":"ContainerDied","Data":"555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47"} Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.890663 4985 generic.go:334] "Generic (PLEG): container finished" podID="57ef54a5-9891-4f69-9907-b726d30d4006" containerID="fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae" exitCode=137 Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.890717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" event={"ID":"57ef54a5-9891-4f69-9907-b726d30d4006","Type":"ContainerDied","Data":"fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae"} Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.895557 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.998202 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.998316 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 
20:04:24.034559 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.034640 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" containerID="cri-o://dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908" gracePeriod=30 Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.214247 4985 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.214352 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.216034 4985 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.216108 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.237511 4985 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2wxf2 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.237595 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podUID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.240531 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="frr" containerID="cri-o://4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8" gracePeriod=2 Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.349354 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:24 crc 
kubenswrapper[4985]: I0128 20:04:24.349434 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.365463 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.366133 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.498035 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.498105 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.498188 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.627656 4985 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-jrf9w container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.627787 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" podUID="645ec0ef-97a6-4e2f-b691-ffcbcab4eed7" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.627899 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.732086 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:24 crc 
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.733845 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.733909 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.734440 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/community-operators-z2xq5"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.734614 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.735057 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4fx27"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736147 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736222 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-wnjfp"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736390 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736483 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wnjfp"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736719 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736732 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4fx27"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736996 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z2xq5"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.746281 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9"} pod="openshift-marketplace/redhat-marketplace-4fx27" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.746352 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" containerID="cri-o://f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" gracePeriod=30
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.749297 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a"} pod="openstack-operators/openstack-operator-index-wnjfp" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.749370 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" containerID="cri-o://a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" gracePeriod=30
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.751500 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c"} pod="openshift-marketplace/community-operators-z2xq5" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.751705 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" containerID="cri-o://acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" gracePeriod=30
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.767307 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.767364 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.793874 4985 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-dkn9m container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
\"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.794051 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.879052 4985 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-pcd6x container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.879678 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podUID="5c56d4fe-62c7-47ef-9a0f-607d899d19b8" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.879863 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.906721 4985 generic.go:334] "Generic (PLEG): container finished" podID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerID="dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908" exitCode=2 Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.906809 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e","Type":"ContainerDied","Data":"dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908"} Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.911801 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8" exitCode=143 Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.911862 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8"} Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.963724 4985 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-2755m container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.963789 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podUID="effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.963870 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:24.984163 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerDied","Data":"32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8"} Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:24.984172 4985 generic.go:334] "Generic (PLEG): container finished" podID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerID="32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8" exitCode=137 Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:24.991837 4985 generic.go:334] "Generic (PLEG): container finished" podID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerID="e91c414e4bddd6fb7b100b376f20e51c053f866b5e844a819f4081df4b77080f" exitCode=1 Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:24.991950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerDied","Data":"e91c414e4bddd6fb7b100b376f20e51c053f866b5e844a819f4081df4b77080f"} Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.038846 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.038935 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.106027 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.106097 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.106486 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.106570 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.188996 4985 scope.go:117] "RemoveContainer" 
containerID="e91c414e4bddd6fb7b100b376f20e51c053f866b5e844a819f4081df4b77080f" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.296373 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:04:25 crc kubenswrapper[4985]: E0128 20:04:25.326672 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.545976 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.546041 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.546173 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.547787 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.548182 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.548241 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.548440 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.600613 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.601173 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.601575 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.673389 4985 
patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-j7z4h container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.673448 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.673522 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.734677 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.734805 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 28 20:04:25 crc kubenswrapper[4985]: E0128 20:04:25.783374 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26777afd_4d9f_4ebb_b8ed_0be018fa5a17.slice/crio-efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70124ff4_00b0_41ef_947d_55eda7af02db.slice/crio-6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929.scope\": RecentStats: unable to find data in memory cache]" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.841871 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.859653 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.012094 4985 generic.go:334] "Generic (PLEG): container finished" podID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerID="efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093" exitCode=0 Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.012276 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" event={"ID":"26777afd-4d9f-4ebb-b8ed-0be018fa5a17","Type":"ContainerDied","Data":"efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093"} Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.015156 4985 generic.go:334] "Generic (PLEG): container finished" podID="70124ff4-00b0-41ef-947d-55eda7af02db" containerID="6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929" exitCode=0 Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.015270 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" 
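[Editor's note] The pod_workers entry above ("back-off 5m0s restarting failed container ... with CrashLoopBackOff") shows kubelet's restart backoff at its cap. A sketch of the delay schedule, assuming the upstream kubelet defaults of a 10s initial delay, doubling per consecutive failure, capped at 5m:

```go
package main

import (
	"fmt"
	"time"
)

// restartDelay computes the CrashLoopBackOff wait before the n-th
// consecutive restart attempt: exponential doubling capped at 5m0s,
// which is the "back-off 5m0s" the log reports once the cap is reached.
// The 10s base and 5m cap are assumed upstream defaults.
func restartDelay(failures int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> wait %s\n", n, restartDelay(n))
	}
	// failure 1 -> 10s, 2 -> 20s, ... 6 and beyond -> 5m0s (capped)
}
```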
event={"ID":"70124ff4-00b0-41ef-947d-55eda7af02db","Type":"ContainerDied","Data":"6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929"} Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.017632 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerID="7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089" exitCode=137 Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.017740 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerDied","Data":"7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089"} Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.019554 4985 generic.go:334] "Generic (PLEG): container finished" podID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerID="d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23" exitCode=0 Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.019615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" event={"ID":"cae1c988-06ab-4748-a62d-5bd7301b2c8d","Type":"ContainerDied","Data":"d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23"} Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.097990 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-j6799_db632812-bc0d-41f2-9c01-a19d40eb69be/console-operator/0.log" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.098050 4985 generic.go:334] "Generic (PLEG): container finished" podID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerID="08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d" exitCode=1 Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.098193 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j6799" event={"ID":"db632812-bc0d-41f2-9c01-a19d40eb69be","Type":"ContainerDied","Data":"08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d"} Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.098842 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd"} pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" containerMessage="Container operator failed liveness probe, will be restarted" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.098881 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" containerID="cri-o://22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd" gracePeriod=30 Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.108024 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.387416 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure 
output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.387843 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.550787 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798418 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798475 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798767 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798811 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798889 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798968 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.800091 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95"} pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.800533 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.965901 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podUID="4fa1b302-aad3-4e6e-9cd2-bba65262c1e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.048446 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.048464 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podUID="4fa1b302-aad3-4e6e-9cd2-bba65262c1e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.113419 4985 generic.go:334] "Generic (PLEG): container finished" podID="1310770f-7cb7-4874-b2a0-4ef733911716" containerID="6e92c8c3af43ff2712b0f8ed60df9fc8862bc534e5395b1207bb47f744084f5b" exitCode=1 Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.113491 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" event={"ID":"1310770f-7cb7-4874-b2a0-4ef733911716","Type":"ContainerDied","Data":"6e92c8c3af43ff2712b0f8ed60df9fc8862bc534e5395b1207bb47f744084f5b"} Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.114674 4985 scope.go:117] "RemoveContainer" containerID="6e92c8c3af43ff2712b0f8ed60df9fc8862bc534e5395b1207bb47f744084f5b" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.124778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" event={"ID":"cae1c988-06ab-4748-a62d-5bd7301b2c8d","Type":"ContainerStarted","Data":"0952d014831debce05e55414a932c95eac7cd0ff7fd38f0c9f8e18d35ab19dca"} Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.124930 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.128781 4985 generic.go:334] "Generic (PLEG): container finished" podID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerID="b40c5de86bd5ee489a9235ce7345e2de0ac05a1a4eb0def7135cf083a63627f0" exitCode=1 Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.128840 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" event={"ID":"70329607-4bbe-43ad-bb7a-2b62f26af473","Type":"ContainerDied","Data":"b40c5de86bd5ee489a9235ce7345e2de0ac05a1a4eb0def7135cf083a63627f0"} Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.129628 4985 scope.go:117] "RemoveContainer" containerID="b40c5de86bd5ee489a9235ce7345e2de0ac05a1a4eb0def7135cf083a63627f0" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.130423 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.130545 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.134154 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" event={"ID":"57ef54a5-9891-4f69-9907-b726d30d4006","Type":"ContainerStarted","Data":"1482f4a5a51d8ed6befa36bf3f466f86f4bfceb1974e8d4d9ca30bdf3999b605"} Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.134769 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.137045 4985 generic.go:334] "Generic (PLEG): container finished" podID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" containerID="33e8754f74c0d539b6d740cc1480faa9b0b2b64b42c058d6a29292cd2a6ebd3c" exitCode=1 Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.137111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" event={"ID":"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3","Type":"ContainerDied","Data":"33e8754f74c0d539b6d740cc1480faa9b0b2b64b42c058d6a29292cd2a6ebd3c"} Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.138265 4985 scope.go:117] "RemoveContainer" containerID="33e8754f74c0d539b6d740cc1480faa9b0b2b64b42c058d6a29292cd2a6ebd3c" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.140601 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.150882 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.150942 4985 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0025f144f3fa7cc81c86c1fe0e47ad15fbc5caa56b23b223f51fe0e0fd77569e" exitCode=1 Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.151076 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0025f144f3fa7cc81c86c1fe0e47ad15fbc5caa56b23b223f51fe0e0fd77569e"} Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.151175 4985 scope.go:117] "RemoveContainer" containerID="e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.152795 4985 scope.go:117] "RemoveContainer" containerID="0025f144f3fa7cc81c86c1fe0e47ad15fbc5caa56b23b223f51fe0e0fd77569e" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.155736 4985 generic.go:334] "Generic (PLEG): container finished" podID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerID="35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5" exitCode=0 Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.155804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" 
event={"ID":"f6ebe169-8b20-4d94-99b7-96afffcb5118","Type":"ContainerDied","Data":"35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5"} Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.168862 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" event={"ID":"81fa949b-5c24-44da-aa29-bd34bcc39d6e","Type":"ContainerStarted","Data":"991dbcbdd632c9448a6c1e6c2ea946fb4562580affccd884b294119803a1706e"} Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.169061 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171431 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171548 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171449 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171804 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171958 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172404 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172509 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": dial tcp 10.217.0.117:8081: connect: connection refused" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172521 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172549 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172556 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172593 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172625 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172638 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.173164 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172441 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.178306 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.289038 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.289093 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.300573 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.300606 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.307791 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.327392 4985 patch_prober.go:28] interesting pod/console-74779d9b4-2xxwx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.327459 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-74779d9b4-2xxwx" podUID="6b348b0a-4b9a-4216-adbf-02bcefe1f011" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.365598 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.365651 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:27 crc kubenswrapper[4985]: E0128 20:04:27.444002 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.572430 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.594446 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.594529 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.616309 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.700619 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.732296 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.803430 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.803732 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.019799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.033805 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 20:04:28 crc kubenswrapper[4985]: E0128 20:04:28.141972 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a is running failed: container process not found" containerID="a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:28 crc kubenswrapper[4985]: E0128 20:04:28.142377 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a is running failed: container process not found" containerID="a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:28 crc kubenswrapper[4985]: E0128 20:04:28.142885 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a is running failed: container process not found" containerID="a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:28 crc kubenswrapper[4985]: E0128 20:04:28.142933 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
Jan 28 20:04:28 crc kubenswrapper[4985]: E0128 20:04:28.142933 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.214318 4985 generic.go:334] "Generic (PLEG): container finished" podID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerID="bff91fc4047ca8cb0c7f5c491bb739bdfbe2ef37ed14ecab78cbc847a02193b4" exitCode=1
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.214420 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" event={"ID":"697da6ae-2950-468c-82e9-bcb1a1af61e7","Type":"ContainerDied","Data":"bff91fc4047ca8cb0c7f5c491bb739bdfbe2ef37ed14ecab78cbc847a02193b4"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.215483 4985 scope.go:117] "RemoveContainer" containerID="bff91fc4047ca8cb0c7f5c491bb739bdfbe2ef37ed14ecab78cbc847a02193b4"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.226777 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.227013 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.232940 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": dial tcp 10.217.0.101:8081: connect: connection refused"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.235376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerStarted","Data":"40e683da5f6dfbf5eb0e698cbdf59d61756a5c2415678d0fa46c39dcbbf52f16"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.235618 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.242052 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": dial tcp 10.217.0.94:8080: connect: connection refused"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.262929 4985 generic.go:334] "Generic (PLEG): container finished" podID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerID="8a3f19cb6aa7abaef144114e6dd8bdb0d9b95990c08eded3c8ad0a1adc11123e" exitCode=1
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.262982 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" event={"ID":"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62","Type":"ContainerDied","Data":"8a3f19cb6aa7abaef144114e6dd8bdb0d9b95990c08eded3c8ad0a1adc11123e"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.264995 4985 scope.go:117] "RemoveContainer" containerID="8a3f19cb6aa7abaef144114e6dd8bdb0d9b95990c08eded3c8ad0a1adc11123e"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.277323 4985 generic.go:334] "Generic (PLEG): container finished" podID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerID="c7994e4e9289d830d3d2b83f6fe38b4798e6db43a7a5f82ef83d020e4a399d26" exitCode=1
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.277413 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" event={"ID":"c77a825c-f720-48a7-b74f-49b16e3ecbed","Type":"ContainerDied","Data":"c7994e4e9289d830d3d2b83f6fe38b4798e6db43a7a5f82ef83d020e4a399d26"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.278166 4985 scope.go:117] "RemoveContainer" containerID="c7994e4e9289d830d3d2b83f6fe38b4798e6db43a7a5f82ef83d020e4a399d26"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.282520 4985 generic.go:334] "Generic (PLEG): container finished" podID="99b88683-3e0a-4afa-91ab-71feac27fba1" containerID="1929e793821573d3c1a565d61317bcfad5538b41e79ae8732d91df7c5e2173b2" exitCode=1
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.282643 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" event={"ID":"99b88683-3e0a-4afa-91ab-71feac27fba1","Type":"ContainerDied","Data":"1929e793821573d3c1a565d61317bcfad5538b41e79ae8732d91df7c5e2173b2"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.284720 4985 scope.go:117] "RemoveContainer" containerID="1929e793821573d3c1a565d61317bcfad5538b41e79ae8732d91df7c5e2173b2"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.286891 4985 generic.go:334] "Generic (PLEG): container finished" podID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerID="acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" exitCode=0
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.286959 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerDied","Data":"acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.292057 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerStarted","Data":"f7d81ad6f3093a262aa8648649aa0c6f2729bd2194c460388848a6793da65337"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.295332 4985 generic.go:334] "Generic (PLEG): container finished" podID="d4d6e990-839d-4186-9382-1a67922556df" containerID="63ac9ba384926938b30ecfda1c6080eb12ddc04d1c11ca3a283a65a2c51b023d" exitCode=1
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.295404 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" event={"ID":"d4d6e990-839d-4186-9382-1a67922556df","Type":"ContainerDied","Data":"63ac9ba384926938b30ecfda1c6080eb12ddc04d1c11ca3a283a65a2c51b023d"}
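[Editor's note] The interleaved SyncLoop entries in this log illustrate how the three probe types are handled differently: readiness failures only flip the pod's ready condition (probe="readiness" status=""), while liveness and startup failures lead to "failed ... probe, will be restarted" followed by a grace-period kill. A schematic Go sketch of that mapping; the type and names are illustrative, not kubelet's actual ones:

```go
package main

import "fmt"

// ProbeType and the action strings are hypothetical; the mapping itself
// matches the behavior visible in the surrounding log entries.
type ProbeType int

const (
	Readiness ProbeType = iota // failure -> pod marked NotReady, no restart
	Liveness                   // failure -> container killed and restarted
	Startup                    // failure -> container killed and restarted
)

func onProbeFailure(p ProbeType) string {
	switch p {
	case Readiness:
		return "remove pod from service endpoints (SyncLoop status=\"\")"
	case Liveness, Startup:
		return "kill container with grace period, then restart"
	default:
		return "unknown probe type"
	}
}

func main() {
	names := map[ProbeType]string{
		Readiness: "readiness", Liveness: "liveness", Startup: "startup",
	}
	for _, p := range []ProbeType{Readiness, Liveness, Startup} {
		fmt.Println(names[p], "->", onProbeFailure(p))
	}
}
```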
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.310021 4985 scope.go:117] "RemoveContainer" containerID="63ac9ba384926938b30ecfda1c6080eb12ddc04d1c11ca3a283a65a2c51b023d"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.313728 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"0a346d5d0650a73ed5f79fd8579ceb35d9e12fbd8bd81d25f6fc533d308cdac7"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.318411 4985 generic.go:334] "Generic (PLEG): container finished" podID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerID="f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70" exitCode=0
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.318476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" event={"ID":"fa42b50c-59ed-4523-a6a0-994a72ff7071","Type":"ContainerDied","Data":"f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.327160 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" containerID="11f64e6924e35c8dac9934d956caaaa9c36e16ee58665f9b1149145a0715d500" exitCode=1
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.327226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" event={"ID":"b5a0c28d-1434-40f0-8759-d76b65dc2c30","Type":"ContainerDied","Data":"11f64e6924e35c8dac9934d956caaaa9c36e16ee58665f9b1149145a0715d500"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.336603 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:04:28 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:04:28 crc kubenswrapper[4985]: >
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.336670 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spssk"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.368674 4985 generic.go:334] "Generic (PLEG): container finished" podID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerID="f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" exitCode=0
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.368763 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerDied","Data":"f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.372479 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c"} pod="openshift-marketplace/redhat-operators-spssk" containerMessage="Container registry-server failed startup probe, will be restarted"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.372526 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" containerID="cri-o://2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c" gracePeriod=30
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.373364 4985 scope.go:117] "RemoveContainer" containerID="11f64e6924e35c8dac9934d956caaaa9c36e16ee58665f9b1149145a0715d500"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.374457 4985 generic.go:334] "Generic (PLEG): container finished" podID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerID="ff10dd6aec762e5c6f8ac00bc0e5212cc4c9ba6fe7bf3a0a1e2f0ca6c68d8b77" exitCode=1
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.374574 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" event={"ID":"50682373-a3d7-491e-84a0-1d5613ee2e8a","Type":"ContainerDied","Data":"ff10dd6aec762e5c6f8ac00bc0e5212cc4c9ba6fe7bf3a0a1e2f0ca6c68d8b77"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.378411 4985 scope.go:117] "RemoveContainer" containerID="ff10dd6aec762e5c6f8ac00bc0e5212cc4c9ba6fe7bf3a0a1e2f0ca6c68d8b77"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.389859 4985 generic.go:334] "Generic (PLEG): container finished" podID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerID="22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd" exitCode=0
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.391355 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" event={"ID":"a23ac89d-75e4-4511-afaa-ef9d6205a672","Type":"ContainerDied","Data":"22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd"}
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.392063 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.392117 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.394471 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.394503 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.736816 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.737194 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.737807 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.737875 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.997302 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8081/readyz\": dial tcp 10.217.0.254:8081: connect: connection refused"
Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.278500 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w"
Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.339181 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused"
Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.356337 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c is running failed: container process not found" containerID="acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.361022 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c is running failed: container process not found" containerID="acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.369364 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c is running failed: container process not found"
containerID="acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.369414 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.373359 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": dial tcp 10.217.0.96:7572: connect: connection refused" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.440474 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.440562 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.443916 4985 generic.go:334] "Generic (PLEG): container finished" podID="367b6525-0367-437a-9fe3-b2007411f4af" containerID="62135ee7a2eb606526c37bb8ddcd9bc19db80c6717a626f58c7287903e72ecfa" exitCode=1 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.443961 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" event={"ID":"367b6525-0367-437a-9fe3-b2007411f4af","Type":"ContainerDied","Data":"62135ee7a2eb606526c37bb8ddcd9bc19db80c6717a626f58c7287903e72ecfa"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.448011 4985 scope.go:117] "RemoveContainer" containerID="62135ee7a2eb606526c37bb8ddcd9bc19db80c6717a626f58c7287903e72ecfa" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.449119 4985 generic.go:334] "Generic (PLEG): container finished" podID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerID="b4af6b1594b7467f446e940a66763ef0f6b702bf026796c5550c43aad291ee7c" exitCode=1 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.449161 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" event={"ID":"cc7f29e1-e6e0-45a0-920a-4b18d8204c65","Type":"ContainerDied","Data":"b4af6b1594b7467f446e940a66763ef0f6b702bf026796c5550c43aad291ee7c"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.449484 4985 scope.go:117] "RemoveContainer" containerID="b4af6b1594b7467f446e940a66763ef0f6b702bf026796c5550c43aad291ee7c" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.482085 4985 generic.go:334] "Generic (PLEG): container finished" podID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerID="c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.482147 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerDied","Data":"c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.494599 4985 generic.go:334] "Generic (PLEG): container finished" podID="9c7284ab-b40f-4275-b85e-77aebd660135" containerID="ac9d4b13d281d4e9fb7fc67135b7b9665a8e3d5bfc5600b7571ded9088424b3d" exitCode=1 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.494695 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" event={"ID":"9c7284ab-b40f-4275-b85e-77aebd660135","Type":"ContainerDied","Data":"ac9d4b13d281d4e9fb7fc67135b7b9665a8e3d5bfc5600b7571ded9088424b3d"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.495448 4985 scope.go:117] "RemoveContainer" containerID="ac9d4b13d281d4e9fb7fc67135b7b9665a8e3d5bfc5600b7571ded9088424b3d" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.513730 4985 generic.go:334] "Generic (PLEG): container finished" podID="38846228-cec9-4a59-b9bb-c766121dacde" containerID="e3fa9329be40e8e7c004d6aea5bd6091de66c9c6bb481177d817723d553d5c05" exitCode=1 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.513829 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" event={"ID":"38846228-cec9-4a59-b9bb-c766121dacde","Type":"ContainerDied","Data":"e3fa9329be40e8e7c004d6aea5bd6091de66c9c6bb481177d817723d553d5c05"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.515680 4985 scope.go:117] "RemoveContainer" containerID="e3fa9329be40e8e7c004d6aea5bd6091de66c9c6bb481177d817723d553d5c05" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.524133 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.536123 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9 is running failed: container process not found" containerID="f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.542692 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9 is running failed: container process not found" containerID="f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.544068 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9 is running failed: container process not found" containerID="f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.544112 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.547793 4985 generic.go:334] "Generic (PLEG): container finished" podID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerID="dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.547958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" event={"ID":"4845499d-139f-4839-9f9f-4d77c7f0ae37","Type":"ContainerDied","Data":"dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.557528 4985 generic.go:334] "Generic (PLEG): container finished" podID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerID="a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.557755 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wnjfp" event={"ID":"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7","Type":"ContainerDied","Data":"a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.558490 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.558532 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.564324 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" containerID="cri-o://c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" gracePeriod=25 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.564553 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerID="03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.564596 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" event={"ID":"a0590b9a-abcc-4541-9914-675dc0ca1976","Type":"ContainerDied","Data":"03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.567489 4985 generic.go:334] "Generic (PLEG): container finished" podID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerID="9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.567526 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" event={"ID":"be08d23e-d6c9-4b42-904b-c36b05dfc316","Type":"ContainerDied","Data":"9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.576576 4985 generic.go:334] "Generic (PLEG): container finished" podID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerID="4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.576879 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" event={"ID":"983beebe-f0c3-4fba-9861-0ea007559cc5","Type":"ContainerDied","Data":"4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.924184 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.924713 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.020512 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" containerID="cri-o://e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" gracePeriod=23 Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.328334 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.328408 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.364844 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.365281 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.582039 4985 patch_prober.go:28] 
interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.582113 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.587287 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" event={"ID":"91971c24-6187-432c-84ba-65dba69b4598","Type":"ContainerDied","Data":"9d2c97996374895a55b806ee971623886630ad28da6fcc1d054133f6f6157280"} Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.587221 4985 generic.go:334] "Generic (PLEG): container finished" podID="91971c24-6187-432c-84ba-65dba69b4598" containerID="9d2c97996374895a55b806ee971623886630ad28da6fcc1d054133f6f6157280" exitCode=1 Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.588318 4985 scope.go:117] "RemoveContainer" containerID="9d2c97996374895a55b806ee971623886630ad28da6fcc1d054133f6f6157280" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.590278 4985 generic.go:334] "Generic (PLEG): container finished" podID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerID="9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48" exitCode=0 Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.590371 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerDied","Data":"9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48"} Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.592887 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" event={"ID":"fa42b50c-59ed-4523-a6a0-994a72ff7071","Type":"ContainerStarted","Data":"e7d9191e6b961711762d840332431117287250aed579dab83322ef2d28ba23f5"} Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.627031 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Jan 28 20:04:30 crc kubenswrapper[4985]: [+]has-synced ok Jan 28 20:04:30 crc kubenswrapper[4985]: [-]process-running failed: reason withheld Jan 28 20:04:30 crc kubenswrapper[4985]: healthz check failed Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.627078 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.635477 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" 
start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.635543 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.635660 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.635683 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.965465 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: E0128 20:04:31.078929 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:31 crc kubenswrapper[4985]: E0128 20:04:31.080161 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:31 crc kubenswrapper[4985]: E0128 20:04:31.081795 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:31 crc kubenswrapper[4985]: E0128 20:04:31.081835 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.108698 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused" Jan 28 20:04:31 crc 
kubenswrapper[4985]: I0128 20:04:31.610471 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" event={"ID":"c77a825c-f720-48a7-b74f-49b16e3ecbed","Type":"ContainerStarted","Data":"783df7ef6709d49ba1fdd15972f0559543c9194300844aff0682556076cd0e99"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.610973 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.615804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7461c47253b22ccd04b9ecdb708f52301f9e2a05703634013c41a2bdbfa6b730"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.618470 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.621990 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerStarted","Data":"6833045965b4db5f71a89941eb40c148c967fd6106d608b51de410b637f7ea88"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.622203 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6lq6d" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.624884 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" event={"ID":"70329607-4bbe-43ad-bb7a-2b62f26af473","Type":"ContainerStarted","Data":"707748125d7191c905a96f0931d8a59affa40e3297c907034f42d4fbc3b0e1de"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.625163 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.628697 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" event={"ID":"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3","Type":"ContainerStarted","Data":"a5abdb6d118d0f853fdfd9b16a03305d4c46560c14c141eca51313f158412064"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.629587 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.632542 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" event={"ID":"d4d6e990-839d-4186-9382-1a67922556df","Type":"ContainerStarted","Data":"78709126d809c26d97d48a9f4bf4e58061c28186d34472b7d635d7f358f177e2"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.639011 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.647558 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" 
event={"ID":"4845499d-139f-4839-9f9f-4d77c7f0ae37","Type":"ContainerStarted","Data":"9a6e4cf0fcfff4838a57e7153aaff862541a3bfd97e0a91bf0b7f364310d1fcb"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.648239 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.648407 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.648449 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.651835 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" event={"ID":"70124ff4-00b0-41ef-947d-55eda7af02db","Type":"ContainerStarted","Data":"5f60dfe81d3f071462135af4af4128b52d2a308acb3162c63d4863d9a512f52f"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.651920 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.652171 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.652222 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.654980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" event={"ID":"99b88683-3e0a-4afa-91ab-71feac27fba1","Type":"ContainerStarted","Data":"543e63830331d8d82aea0da0ca38f4216158dd9569b2059f39ed95de131ea709"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.655235 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.662661 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-j6799_db632812-bc0d-41f2-9c01-a19d40eb69be/console-operator/0.log" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.662909 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j6799" event={"ID":"db632812-bc0d-41f2-9c01-a19d40eb69be","Type":"ContainerStarted","Data":"c86ed9f518788a5f9945d537e318887017b2117f5135e704d75f7f724eb6d1f0"} Jan 
28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.663000 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.667321 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" event={"ID":"f6ebe169-8b20-4d94-99b7-96afffcb5118","Type":"ContainerStarted","Data":"6030becdcf765cf15b70923de98b03ac3f2561b8e5be80b8946bd77d9ef89412"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.667717 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.669848 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.669904 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.671093 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" event={"ID":"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62","Type":"ContainerStarted","Data":"cf101369cf85c9674f018e8e895e73945a08e7b8ec5e2e56aeee4bfc9a2e83bd"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.677137 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" event={"ID":"26777afd-4d9f-4ebb-b8ed-0be018fa5a17","Type":"ContainerStarted","Data":"f814f4bbff8e72532ff093711ea65a354dd1db8cca317c54d4411dbc6c778eb3"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.677459 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.682802 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" event={"ID":"983beebe-f0c3-4fba-9861-0ea007559cc5","Type":"ContainerStarted","Data":"95649f7a5a4ff9cfecec97fc9c5e21fda60ba14f5af89649189d36a23b73d4e0"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.683283 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.683795 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.683845 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.686181 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" event={"ID":"50682373-a3d7-491e-84a0-1d5613ee2e8a","Type":"ContainerStarted","Data":"a3bd17b8623ecd9442143c4135a7a62281759fdaba53645b9e9dc41a8d3d923c"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.686396 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.692077 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" event={"ID":"697da6ae-2950-468c-82e9-bcb1a1af61e7","Type":"ContainerStarted","Data":"59c6fb267914bdebe741eccfd6ee9bce6f237394911b1eb50ef6e99d5ba8c574"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.693512 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.696868 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.697977 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"79dfe7194b0e62b23b4d4c5b70bd5155add0435bc59cf05863ad051dafed8b52"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.700107 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.703924 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" event={"ID":"be08d23e-d6c9-4b42-904b-c36b05dfc316","Type":"ContainerStarted","Data":"8300f6020fc08f440ad96282b353b926db5a3a000c1da77ecce205a6dbdb5ce9"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.712355 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"c3903129a5e050768bf859bb1f16a9a4faa90b6f347027f166bd372d2864fc1e"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.713176 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.715053 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" event={"ID":"1310770f-7cb7-4874-b2a0-4ef733911716","Type":"ContainerStarted","Data":"795b749a3a33ce2f2e0e93a9b99bed6b6918d451c67149a49a303136ad19d09d"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.715343 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 
20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.721751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" event={"ID":"b5a0c28d-1434-40f0-8759-d76b65dc2c30","Type":"ContainerStarted","Data":"b6fd30c1f3fa4c72fa4fad22e370eedd84788dd55134f39780fa4592a5d6f2e8"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.722619 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.729433 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" event={"ID":"a0590b9a-abcc-4541-9914-675dc0ca1976","Type":"ContainerStarted","Data":"46a35fb2be17a2d04681a0d0859480bbc515d0d735d4c5a112baba7d5a412ce1"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.730733 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.730797 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.730820 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.734707 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e","Type":"ContainerStarted","Data":"62cf1c8a35444574b7b1bf54c306a32a089ff1b805c5da39eba8f5950a3493b1"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.734932 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.734989 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.735093 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.735125 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.452748 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get 
\"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.453118 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: E0128 20:04:32.585006 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:32 crc kubenswrapper[4985]: E0128 20:04:32.586743 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:32 crc kubenswrapper[4985]: E0128 20:04:32.594956 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:32 crc kubenswrapper[4985]: E0128 20:04:32.595051 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.747671 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" event={"ID":"38846228-cec9-4a59-b9bb-c766121dacde","Type":"ContainerStarted","Data":"08be1fbcf80783a420a679de05934fc91371f37013861c0aa0625fe62577273c"} Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.780427 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qnrsp_cb7bad3c-725d-4a80-b398-140c6acf3825/router/0.log" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.780717 4985 generic.go:334] "Generic (PLEG): container finished" podID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerID="8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a" exitCode=137 Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.780877 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qnrsp" event={"ID":"cb7bad3c-725d-4a80-b398-140c6acf3825","Type":"ContainerDied","Data":"8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a"} Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.781951 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.781953 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782002 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782019 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782032 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782058 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.781967 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782115 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.781998 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782147 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection 
refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782568 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782594 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.364789 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.365091 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.570581 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.792758 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wnjfp" event={"ID":"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7","Type":"ContainerStarted","Data":"a849f24b9864581dd1fe2b639b6520564fdc5a822b8e8b2ec44a366404a85f21"} Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.793212 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.793266 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.323691 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.338103 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.390327 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.466768 4985 
patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" start-of-body= Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.466818 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.805929 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" event={"ID":"367b6525-0367-437a-9fe3-b2007411f4af","Type":"ContainerStarted","Data":"383dc81fb4b4a5055cd5226673e95c8f2bf67e8261407836fb4486ddc158608e"} Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.808267 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" event={"ID":"cc7f29e1-e6e0-45a0-920a-4b18d8204c65","Type":"ContainerStarted","Data":"b0f57d31b5ba5bdf7f84edda1d7123574e48b9a33672903e2bca66b75ebad7c3"} Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.808495 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.810561 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" event={"ID":"9c7284ab-b40f-4275-b85e-77aebd660135","Type":"ContainerStarted","Data":"7dd6cc1f217c705b4dd69f055fd838f5aa8de08ac32385e99de36645a10be038"} Jan 28 20:04:35 crc kubenswrapper[4985]: I0128 20:04:35.722677 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 20:04:35 crc kubenswrapper[4985]: I0128 20:04:35.850075 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 20:04:35 crc kubenswrapper[4985]: I0128 20:04:35.850517 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.108061 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.140759 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.365509 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 
20:04:36.365779 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.383926 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.514666 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.691807 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.692151 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.693353 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.693426 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" containerID="cri-o://ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e" gracePeriod=30 Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.761767 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v2zt6"] Jan 28 20:04:36 crc kubenswrapper[4985]: E0128 20:04:36.763768 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.763789 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" Jan 28 20:04:36 crc kubenswrapper[4985]: E0128 20:04:36.763827 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="extract-utilities" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.763836 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="extract-utilities" Jan 28 20:04:36 crc kubenswrapper[4985]: E0128 20:04:36.763863 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="extract-content" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.763868 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="extract-content" Jan 28 20:04:36 crc kubenswrapper[4985]: E0128 20:04:36.764155 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c901d430-df5f-4afa-8a40-9ed18d2ad552" containerName="keystone-cron" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.764169 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c901d430-df5f-4afa-8a40-9ed18d2ad552" containerName="keystone-cron" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.764634 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c901d430-df5f-4afa-8a40-9ed18d2ad552" containerName="keystone-cron" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.764675 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.770261 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.881284 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-catalog-content\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.881492 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4mp\" (UniqueName: \"kubernetes.io/projected/bad9c3c9-3333-4c1b-a020-2322b7baae36-kube-api-access-8m4mp\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.881661 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-utilities\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.890654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerStarted","Data":"8e001b6717573e47dde036853c9600484c643d17dfa3271afbc9f87f864ba6a8"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.910530 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerStarted","Data":"327771973a3d1d6a1a4aac847d6c2739715a8a362c1daaaa13d4585cae663b69"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.948644 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"f1ef70d944bea9183ea8dcafb63b98535f5e207813d52b5b82a42152b36c3f5a"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.963896 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qnrsp_cb7bad3c-725d-4a80-b398-140c6acf3825/router/0.log" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.963967 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qnrsp" 
event={"ID":"cb7bad3c-725d-4a80-b398-140c6acf3825","Type":"ContainerStarted","Data":"013f0faf90e02d1c24593266d641dd3c59feb576f4d2fe401f9b506336ce4275"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.967628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" event={"ID":"91971c24-6187-432c-84ba-65dba69b4598","Type":"ContainerStarted","Data":"7fd72ebd7aa35111b94e40f5fdc7771a59db814f8d1383cc484b15cf6b357e93"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.968471 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.989397 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-catalog-content\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.989480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m4mp\" (UniqueName: \"kubernetes.io/projected/bad9c3c9-3333-4c1b-a020-2322b7baae36-kube-api-access-8m4mp\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.989572 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-utilities\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.991097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-catalog-content\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.992041 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-utilities\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.993639 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerStarted","Data":"94549d4f8e9257f2f1d2669248959bfed37ae938a6f3fe3e0192d7940abaaabe"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.994725 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.013720 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" 
event={"ID":"a23ac89d-75e4-4511-afaa-ef9d6205a672","Type":"ContainerStarted","Data":"87ecffcc4f224ebf860a9f0c28bb447716191ca7e79dcd0ed492e3dd7b582097"} Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.013775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.014134 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" start-of-body= Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.014176 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.054281 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.093377 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2zt6"] Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.132715 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.140780 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m4mp\" (UniqueName: \"kubernetes.io/projected/bad9c3c9-3333-4c1b-a020-2322b7baae36-kube-api-access-8m4mp\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.154909 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.263707 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:04:37 crc kubenswrapper[4985]: E0128 20:04:37.264082 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.295138 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.311385 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.371105 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.430593 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.572700 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.627172 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.629891 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.629984 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.731017 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.735013 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.960604 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.023936 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" start-of-body= Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.023995 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.136558 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.136599 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.152017 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.160545 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 20:04:38 crc 
kubenswrapper[4985]: I0128 20:04:38.597278 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.616309 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.620441 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.630386 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.745040 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.024225 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.074711 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.314772 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.367459 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.367521 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.377145 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.529465 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.529532 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.559488 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.623051 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.928040 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.041698 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.047319 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.089717 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.344271 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.452430 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=< Jan 28 20:04:40 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:04:40 crc kubenswrapper[4985]: > Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.599543 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output=< Jan 28 20:04:40 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:04:40 crc kubenswrapper[4985]: > Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.638420 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.670427 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.966006 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6lq6d" Jan 28 20:04:41 crc kubenswrapper[4985]: E0128 20:04:41.083580 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:41 crc kubenswrapper[4985]: E0128 20:04:41.085707 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:41 crc kubenswrapper[4985]: E0128 20:04:41.088286 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:41 crc kubenswrapper[4985]: E0128 20:04:41.088343 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" Jan 28 20:04:41 crc kubenswrapper[4985]: I0128 20:04:41.111507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.066777 4985 generic.go:334] "Generic (PLEG): container finished" podID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerID="ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e" exitCode=0 Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.066855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerDied","Data":"ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e"} Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.298664 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 20:04:42 crc kubenswrapper[4985]: E0128 20:04:42.583513 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0 is running failed: container process not found" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:42 crc kubenswrapper[4985]: E0128 20:04:42.584204 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0 is running failed: container process not found" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:42 crc kubenswrapper[4985]: E0128 20:04:42.584675 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0 is running failed: container process not found" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:42 crc kubenswrapper[4985]: E0128 20:04:42.584732 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.642685 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.779873 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2zt6"] Jan 28 20:04:42 crc kubenswrapper[4985]: W0128 20:04:42.787410 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbad9c3c9_3333_4c1b_a020_2322b7baae36.slice/crio-08b1b1e12469811d9c19f1e7452483bdf6acdac16131f7fb57f9e0c1435fe84e WatchSource:0}: Error finding container 08b1b1e12469811d9c19f1e7452483bdf6acdac16131f7fb57f9e0c1435fe84e: Status 404 returned error can't find the container with id 08b1b1e12469811d9c19f1e7452483bdf6acdac16131f7fb57f9e0c1435fe84e Jan 28 20:04:43 crc kubenswrapper[4985]: I0128 20:04:43.081418 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerStarted","Data":"08b1b1e12469811d9c19f1e7452483bdf6acdac16131f7fb57f9e0c1435fe84e"} Jan 28 20:04:43 crc kubenswrapper[4985]: I0128 20:04:43.102268 4985 generic.go:334] "Generic (PLEG): container finished" podID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" exitCode=0 Jan 28 20:04:43 crc kubenswrapper[4985]: I0128 20:04:43.102274 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerDied","Data":"c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0"} Jan 28 20:04:44 crc kubenswrapper[4985]: I0128 20:04:44.124612 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerStarted","Data":"51e4ab062d26e9e62e405b43c5cfb6090cbfd4b202868d6cc4c9d661f9ad3c35"} Jan 28 20:04:44 crc kubenswrapper[4985]: I0128 20:04:44.128973 4985 generic.go:334] "Generic (PLEG): container finished" podID="bad9c3c9-3333-4c1b-a020-2322b7baae36" containerID="a7f98dc1c4a3f422e11a1269332fcfae432cf598bd7e84e2b3508e5031e3a6e3" exitCode=0 Jan 28 20:04:44 crc kubenswrapper[4985]: I0128 20:04:44.129023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerDied","Data":"a7f98dc1c4a3f422e11a1269332fcfae432cf598bd7e84e2b3508e5031e3a6e3"} Jan 28 20:04:44 crc kubenswrapper[4985]: I0128 20:04:44.466402 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 20:04:45 crc kubenswrapper[4985]: I0128 20:04:45.970111 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 20:04:46 crc kubenswrapper[4985]: I0128 20:04:46.505815 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 20:04:46 crc kubenswrapper[4985]: I0128 20:04:46.733161 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 20:04:47 crc kubenswrapper[4985]: I0128 20:04:47.148568 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 20:04:47 crc kubenswrapper[4985]: I0128 20:04:47.183008 4985 generic.go:334] "Generic (PLEG): container finished" podID="a808dc72-a951-4f07-a612-2fde39a49a30" containerID="ee163311dba6c1ce70ff2544f9371712e8075bba77bbad31800b493e5588741e" exitCode=1 Jan 28 20:04:47 crc kubenswrapper[4985]: I0128 
20:04:47.183064 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a808dc72-a951-4f07-a612-2fde39a49a30","Type":"ContainerDied","Data":"ee163311dba6c1ce70ff2544f9371712e8075bba77bbad31800b493e5588741e"} Jan 28 20:04:47 crc kubenswrapper[4985]: I0128 20:04:47.576639 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:48 crc kubenswrapper[4985]: I0128 20:04:48.198466 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerStarted","Data":"1f48d3ab4b19cf2cebcfdbbc33f325595adb0916611634a71eb5111f8e383743"} Jan 28 20:04:48 crc kubenswrapper[4985]: I0128 20:04:48.203032 4985 generic.go:334] "Generic (PLEG): container finished" podID="99828525-9397-448d-9a51-bc0da88038ac" containerID="eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409" exitCode=137 Jan 28 20:04:48 crc kubenswrapper[4985]: I0128 20:04:48.203183 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerDied","Data":"eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409"} Jan 28 20:04:49 crc kubenswrapper[4985]: I0128 20:04:49.371775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 20:04:50 crc kubenswrapper[4985]: I0128 20:04:50.470358 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=< Jan 28 20:04:50 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:04:50 crc kubenswrapper[4985]: > Jan 28 20:04:50 crc kubenswrapper[4985]: I0128 20:04:50.600199 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output=< Jan 28 20:04:50 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:04:50 crc kubenswrapper[4985]: > Jan 28 20:04:51 crc kubenswrapper[4985]: E0128 20:04:51.079638 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:51 crc kubenswrapper[4985]: E0128 20:04:51.081148 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:51 crc kubenswrapper[4985]: E0128 20:04:51.082870 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" 
cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:51 crc kubenswrapper[4985]: E0128 20:04:51.082911 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" Jan 28 20:04:51 crc kubenswrapper[4985]: W0128 20:04:51.575497 4985 logging.go:55] [core] [Channel #7181 SubChannel #7182]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", }. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused" Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.843850 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" containerID="cri-o://47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95" gracePeriod=15 Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.901363 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.983840 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.983898 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984012 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984075 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984169 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984311 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc 
kubenswrapper[4985]: I0128 20:04:51.984390 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984545 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984588 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.999042 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.000905 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data" (OuterVolumeSpecName: "config-data") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.006962 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.015007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss" (OuterVolumeSpecName: "kube-api-access-f5tss") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "kube-api-access-f5tss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.017077 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.078164 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.090652 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095041 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095152 4985 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095487 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095640 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095661 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095677 4985 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095689 4985 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.105494 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.110230 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.149037 4985 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.198640 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.198673 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.198683 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.260809 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a808dc72-a951-4f07-a612-2fde39a49a30","Type":"ContainerDied","Data":"8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840"} Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.260828 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.262143 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.263046 4985 generic.go:334] "Generic (PLEG): container finished" podID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerID="47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95" exitCode=0 Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.263090 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" event={"ID":"f077e962-d9b2-45c5-a87e-1dd03ad0378b","Type":"ContainerDied","Data":"47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95"} Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.264156 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.527182 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.584123 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.584610 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.661483 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.309557 4985 generic.go:334] "Generic (PLEG): container finished" podID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" exitCode=0 Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.309864 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerDied","Data":"e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83"} Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.313569 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9"} Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.323908 4985 generic.go:334] "Generic (PLEG): container finished" podID="99828525-9397-448d-9a51-bc0da88038ac" containerID="82bed0d8a42bca7e53b39c9544bdc0936cdb44ffd82eeecb67a51d1676f725c4" exitCode=1 Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.323994 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerDied","Data":"82bed0d8a42bca7e53b39c9544bdc0936cdb44ffd82eeecb67a51d1676f725c4"} Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.334363 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" 
event={"ID":"f077e962-d9b2-45c5-a87e-1dd03ad0378b","Type":"ContainerStarted","Data":"0a982d845a9f831e0c88084af06f221301b67133998c9991352ecbfc3bd42961"} Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.334432 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.335140 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" start-of-body= Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.335203 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.344989 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerStarted","Data":"67e6340b7385cbd4895b294330f0737f97a1d0e6a21067e4bee9b734f5e32783"} Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.347888 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerStarted","Data":"ed2f8091895e95a2db82aadc41dd96eee2d0cdbf5f2ca90e286001883ce27f4f"} Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.350663 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"4ee2d13f340a17f08093a19637dc0d1941ddfb300085d4915a7368b76c5f943f"} Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.351642 4985 scope.go:117] "RemoveContainer" containerID="82bed0d8a42bca7e53b39c9544bdc0936cdb44ffd82eeecb67a51d1676f725c4" Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.583626 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 20:04:57 crc kubenswrapper[4985]: I0128 20:04:57.564573 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:58 crc kubenswrapper[4985]: I0128 20:04:58.401611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"11542b426bbe009755598c19ce242a68de7b2bc4b2683f0e2c7891f10ceff9a3"} Jan 28 20:04:58 crc kubenswrapper[4985]: I0128 20:04:58.404607 4985 generic.go:334] "Generic (PLEG): container finished" podID="bad9c3c9-3333-4c1b-a020-2322b7baae36" containerID="67e6340b7385cbd4895b294330f0737f97a1d0e6a21067e4bee9b734f5e32783" exitCode=0 Jan 28 20:04:58 crc kubenswrapper[4985]: I0128 20:04:58.404640 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" 
event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerDied","Data":"67e6340b7385cbd4895b294330f0737f97a1d0e6a21067e4bee9b734f5e32783"} Jan 28 20:04:58 crc kubenswrapper[4985]: I0128 20:04:58.960217 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:59 crc kubenswrapper[4985]: I0128 20:04:59.147657 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:59 crc kubenswrapper[4985]: I0128 20:04:59.446776 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-spssk_0762e6e7-b454-432f-91b7-b8cefccdc85e/registry-server/0.log" Jan 28 20:04:59 crc kubenswrapper[4985]: I0128 20:04:59.451115 4985 generic.go:334] "Generic (PLEG): container finished" podID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerID="2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c" exitCode=137 Jan 28 20:04:59 crc kubenswrapper[4985]: I0128 20:04:59.452199 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c"} Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.464112 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-spssk_0762e6e7-b454-432f-91b7-b8cefccdc85e/registry-server/0.log" Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.465339 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerStarted","Data":"2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715"} Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.468394 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerStarted","Data":"b501f5588865c688bdab98e0ea5fe0443eb390e5dbc5774e7319ee3d1a15949e"} Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.514543 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v2zt6" podStartSLOduration=9.421382826 podStartE2EDuration="24.514522069s" podCreationTimestamp="2026-01-28 20:04:36 +0000 UTC" firstStartedPulling="2026-01-28 20:04:44.131721641 +0000 UTC m=+6694.958284482" lastFinishedPulling="2026-01-28 20:04:59.224860904 +0000 UTC m=+6710.051423725" observedRunningTime="2026-01-28 20:05:00.502176139 +0000 UTC m=+6711.328738960" watchObservedRunningTime="2026-01-28 20:05:00.514522069 +0000 UTC m=+6711.341084890" Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.588184 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:00 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:00 crc kubenswrapper[4985]: > Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.595166 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:00 crc 
Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.595166 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:05:00 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:05:00 crc kubenswrapper[4985]: >
Jan 28 20:05:01 crc kubenswrapper[4985]: I0128 20:05:01.077343 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 28 20:05:01 crc kubenswrapper[4985]: I0128 20:05:01.077396 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 28 20:05:01 crc kubenswrapper[4985]: I0128 20:05:01.174695 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 28 20:05:01 crc kubenswrapper[4985]: I0128 20:05:01.596790 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.125088 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 28 20:05:02 crc kubenswrapper[4985]: E0128 20:05:02.126090 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a808dc72-a951-4f07-a612-2fde39a49a30" containerName="tempest-tests-tempest-tests-runner"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.126110 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a808dc72-a951-4f07-a612-2fde39a49a30" containerName="tempest-tests-tempest-tests-runner"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.126362 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a808dc72-a951-4f07-a612-2fde39a49a30" containerName="tempest-tests-tempest-tests-runner"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.127417 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.130232 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hb5cc"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.160860 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.286685 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.286866 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxfgp\" (UniqueName: \"kubernetes.io/projected/e5d86a77-6a87-4434-b571-f453639eb3a2-kube-api-access-dxfgp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.389159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxfgp\" (UniqueName: \"kubernetes.io/projected/e5d86a77-6a87-4434-b571-f453639eb3a2-kube-api-access-dxfgp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.389416 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.389928 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.427824 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxfgp\" (UniqueName: \"kubernetes.io/projected/e5d86a77-6a87-4434-b571-f453639eb3a2-kube-api-access-dxfgp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.438160 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.655679 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 20:05:03 crc kubenswrapper[4985]: I0128 20:05:03.195124 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 20:05:03 crc kubenswrapper[4985]: I0128 20:05:03.513595 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e5d86a77-6a87-4434-b571-f453639eb3a2","Type":"ContainerStarted","Data":"0056f7f17642c2708b2035e699df1829c6fce321931b2d5124b59cba9c26e7c3"} Jan 28 20:05:05 crc kubenswrapper[4985]: I0128 20:05:05.002714 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:05 crc kubenswrapper[4985]: I0128 20:05:05.002943 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:06 crc kubenswrapper[4985]: I0128 20:05:06.317527 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:06 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:06 crc kubenswrapper[4985]: > Jan 28 20:05:06 crc kubenswrapper[4985]: I0128 20:05:06.549794 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e5d86a77-6a87-4434-b571-f453639eb3a2","Type":"ContainerStarted","Data":"7aaae0d8282a48328faa48d3e48327c860f6172702ab7ed9d8c2a0952e1bfa3b"} Jan 28 20:05:06 crc kubenswrapper[4985]: I0128 20:05:06.569447 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.7737185229999999 podStartE2EDuration="4.569425449s" podCreationTimestamp="2026-01-28 20:05:02 +0000 UTC" firstStartedPulling="2026-01-28 20:05:03.215614338 +0000 UTC m=+6714.042177159" lastFinishedPulling="2026-01-28 20:05:06.011321264 +0000 UTC m=+6716.837884085" observedRunningTime="2026-01-28 20:05:06.562744439 +0000 UTC m=+6717.389307260" watchObservedRunningTime="2026-01-28 20:05:06.569425449 +0000 UTC m=+6717.395988270" Jan 28 20:05:07 crc kubenswrapper[4985]: I0128 20:05:07.431918 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:05:07 crc kubenswrapper[4985]: I0128 20:05:07.432592 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:05:08 crc kubenswrapper[4985]: I0128 20:05:08.391430 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 20:05:08 crc kubenswrapper[4985]: I0128 20:05:08.544919 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v2zt6" podUID="bad9c3c9-3333-4c1b-a020-2322b7baae36" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:08 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 
Jan 28 20:05:08 crc kubenswrapper[4985]: I0128 20:05:08.544919 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v2zt6" podUID="bad9c3c9-3333-4c1b-a020-2322b7baae36" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:05:08 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:05:08 crc kubenswrapper[4985]: >
Jan 28 20:05:10 crc kubenswrapper[4985]: I0128 20:05:10.413632 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:05:10 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:05:10 crc kubenswrapper[4985]: >
Jan 28 20:05:10 crc kubenswrapper[4985]: I0128 20:05:10.585736 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:05:10 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:05:10 crc kubenswrapper[4985]: >
Jan 28 20:05:16 crc kubenswrapper[4985]: I0128 20:05:16.072713 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:05:16 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:05:16 crc kubenswrapper[4985]: >
Jan 28 20:05:18 crc kubenswrapper[4985]: I0128 20:05:18.490112 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v2zt6" podUID="bad9c3c9-3333-4c1b-a020-2322b7baae36" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:05:18 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:05:18 crc kubenswrapper[4985]: >
Jan 28 20:05:19 crc kubenswrapper[4985]: I0128 20:05:19.596419 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4fx27"
Jan 28 20:05:19 crc kubenswrapper[4985]: I0128 20:05:19.661175 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4fx27"
Jan 28 20:05:20 crc kubenswrapper[4985]: I0128 20:05:20.413389 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:05:20 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:05:20 crc kubenswrapper[4985]: >
Jan 28 20:05:23 crc kubenswrapper[4985]: I0128 20:05:23.502830 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 20:05:26 crc kubenswrapper[4985]: I0128 20:05:26.056694 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:05:26 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:05:26 crc kubenswrapper[4985]: >
Jan 28 20:05:27 crc kubenswrapper[4985]: I0128 20:05:27.482759 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v2zt6"
Jan 28 20:05:27 crc kubenswrapper[4985]: I0128 20:05:27.544566 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v2zt6"
Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 20:05:28.574455 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2zt6"]
Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 20:05:28.723144 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"]
Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 20:05:28.726956 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" containerID="cri-o://d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8" gracePeriod=2
Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 20:05:28.890345 4985 generic.go:334] "Generic (PLEG): container finished" podID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerID="d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8" exitCode=0
Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 20:05:28.890406 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerDied","Data":"d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8"}
Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.407004 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z2xq5"
Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.477412 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z2xq5"
Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.905714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerDied","Data":"9065c3cedcf2c522ec02096a476095855bf69695fefcb13d3535bb45ef54bf89"}
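
The certified-operators-mclkd teardown just logged shows the kill path: the runtime is asked to stop the container with gracePeriod=2, the process exits cleanly (exitCode=0) inside the window, and only a missed deadline would escalate to a hard kill. A process-level sketch of the same pattern, using plain os/exec rather than CRI:

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the "Killing container with a grace period" flow:
// SIGTERM first, then SIGKILL if the process is still alive after gracePeriod.
func stopWithGrace(cmd *exec.Cmd, gracePeriod time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period (exitCode=0 in the log above)
	case <-time.After(gracePeriod):
		err := cmd.Process.Kill() // grace period elapsed; force-kill (SIGKILL)
		<-done                    // reap the process
		return err
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = stopWithGrace(cmd, 2*time.Second) // gracePeriod=2, as logged
}
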
Need to start a new one" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.906621 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9065c3cedcf2c522ec02096a476095855bf69695fefcb13d3535bb45ef54bf89" Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.957976 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") pod \"1304efc2-5033-41c5-83b5-5df3edfde2f1\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.958025 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content\") pod \"1304efc2-5033-41c5-83b5-5df3edfde2f1\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.958106 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") pod \"1304efc2-5033-41c5-83b5-5df3edfde2f1\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.960620 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities" (OuterVolumeSpecName: "utilities") pod "1304efc2-5033-41c5-83b5-5df3edfde2f1" (UID: "1304efc2-5033-41c5-83b5-5df3edfde2f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.972481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt" (OuterVolumeSpecName: "kube-api-access-4nhmt") pod "1304efc2-5033-41c5-83b5-5df3edfde2f1" (UID: "1304efc2-5033-41c5-83b5-5df3edfde2f1"). InnerVolumeSpecName "kube-api-access-4nhmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.008808 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1304efc2-5033-41c5-83b5-5df3edfde2f1" (UID: "1304efc2-5033-41c5-83b5-5df3edfde2f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.061544 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") on node \"crc\" DevicePath \"\"" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.061578 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.061587 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.919485 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.957215 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"] Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.973045 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"] Jan 28 20:05:31 crc kubenswrapper[4985]: I0128 20:05:31.278782 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" path="/var/lib/kubelet/pods/1304efc2-5033-41c5-83b5-5df3edfde2f1/volumes" Jan 28 20:05:36 crc kubenswrapper[4985]: I0128 20:05:36.144209 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:36 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:36 crc kubenswrapper[4985]: > Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.110851 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"] Jan 28 20:05:37 crc kubenswrapper[4985]: E0128 20:05:37.113082 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="extract-content" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.113200 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="extract-content" Jan 28 20:05:37 crc kubenswrapper[4985]: E0128 20:05:37.113305 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="extract-utilities" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.113366 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="extract-utilities" Jan 28 20:05:37 crc kubenswrapper[4985]: E0128 20:05:37.113535 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.113605 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.115046 4985 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.119911 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.136611 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sg6vz"/"default-dockercfg-267h6" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.136628 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sg6vz"/"openshift-service-ca.crt" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.136848 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sg6vz"/"kube-root-ca.crt" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.227779 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"] Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.265787 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.265960 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.368579 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.368909 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.372953 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.402608 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.445039 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:38 crc kubenswrapper[4985]: I0128 20:05:38.076795 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"] Jan 28 20:05:38 crc kubenswrapper[4985]: I0128 20:05:38.100377 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 20:05:38 crc kubenswrapper[4985]: I0128 20:05:38.180809 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" event={"ID":"b1ab1977-13f1-41b6-9edd-cbb936fb8485","Type":"ContainerStarted","Data":"06c49318f7af370af69f8377123b97a103b8ab3290738fc3695d6344614a2de1"} Jan 28 20:05:46 crc kubenswrapper[4985]: I0128 20:05:46.178306 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:46 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:46 crc kubenswrapper[4985]: > Jan 28 20:05:47 crc kubenswrapper[4985]: I0128 20:05:47.927658 4985 scope.go:117] "RemoveContainer" containerID="d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8" Jan 28 20:05:49 crc kubenswrapper[4985]: I0128 20:05:49.182052 4985 scope.go:117] "RemoveContainer" containerID="13c932ede5b3e566b7752d12093b1dd4c26483b9039f367f6e4ba1e8e603bf3f" Jan 28 20:05:49 crc kubenswrapper[4985]: I0128 20:05:49.272619 4985 scope.go:117] "RemoveContainer" containerID="14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da" Jan 28 20:05:49 crc kubenswrapper[4985]: E0128 20:05:49.334065 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da\": container with ID starting with 14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da not found: ID does not exist" containerID="14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da" Jan 28 20:05:50 crc kubenswrapper[4985]: I0128 20:05:50.352681 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" event={"ID":"b1ab1977-13f1-41b6-9edd-cbb936fb8485","Type":"ContainerStarted","Data":"5355598335d0d9dff197dc4d09b9b325ee69e3336b9f5be9371d1aa865456367"} Jan 28 20:05:50 crc kubenswrapper[4985]: I0128 20:05:50.353010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" event={"ID":"b1ab1977-13f1-41b6-9edd-cbb936fb8485","Type":"ContainerStarted","Data":"0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629"} Jan 28 20:05:50 crc kubenswrapper[4985]: I0128 20:05:50.382408 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" podStartSLOduration=2.205996132 podStartE2EDuration="13.38238655s" podCreationTimestamp="2026-01-28 20:05:37 +0000 UTC" firstStartedPulling="2026-01-28 20:05:38.096108051 +0000 UTC m=+6748.922670882" lastFinishedPulling="2026-01-28 20:05:49.272498479 +0000 UTC m=+6760.099061300" observedRunningTime="2026-01-28 20:05:50.368766614 +0000 UTC m=+6761.195329435" watchObservedRunningTime="2026-01-28 20:05:50.38238655 +0000 UTC m=+6761.208949371" Jan 28 20:05:55 crc kubenswrapper[4985]: I0128 20:05:55.076691 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:55 crc kubenswrapper[4985]: I0128 20:05:55.131346 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.186671 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-tsjq4"] Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.188582 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.360751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.360973 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.463472 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.463634 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.465303 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.483823 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.512900 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:57 crc kubenswrapper[4985]: I0128 20:05:57.471426 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" event={"ID":"e4275dde-20a8-4f67-8ad6-3599ced73c5a","Type":"ContainerStarted","Data":"788e0621889e18f29167784cbe9d1a5ffba373376c1a278b0e926707a59d5ab2"} Jan 28 20:05:58 crc kubenswrapper[4985]: I0128 20:05:58.651348 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:05:58 crc kubenswrapper[4985]: I0128 20:05:58.651939 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" containerID="cri-o://2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715" gracePeriod=2 Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.547472 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-spssk_0762e6e7-b454-432f-91b7-b8cefccdc85e/registry-server/0.log" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.560200 4985 generic.go:334] "Generic (PLEG): container finished" podID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerID="2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715" exitCode=0 Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.560241 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715"} Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.560297 4985 scope.go:117] "RemoveContainer" containerID="2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.801287 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.881348 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") pod \"0762e6e7-b454-432f-91b7-b8cefccdc85e\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.881595 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") pod \"0762e6e7-b454-432f-91b7-b8cefccdc85e\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.881755 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") pod \"0762e6e7-b454-432f-91b7-b8cefccdc85e\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.884637 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities" (OuterVolumeSpecName: "utilities") pod "0762e6e7-b454-432f-91b7-b8cefccdc85e" (UID: "0762e6e7-b454-432f-91b7-b8cefccdc85e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.919911 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb" (OuterVolumeSpecName: "kube-api-access-blvfb") pod "0762e6e7-b454-432f-91b7-b8cefccdc85e" (UID: "0762e6e7-b454-432f-91b7-b8cefccdc85e"). InnerVolumeSpecName "kube-api-access-blvfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.984834 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") on node \"crc\" DevicePath \"\"" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.984965 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.039418 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0762e6e7-b454-432f-91b7-b8cefccdc85e" (UID: "0762e6e7-b454-432f-91b7-b8cefccdc85e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.087013 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.573195 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"28f0a59519c9b60c4ce3a2ff63447bff887c38b436a2ce97a8fb8d2c39a8e834"} Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.573505 4985 scope.go:117] "RemoveContainer" containerID="2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.573712 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.608852 4985 scope.go:117] "RemoveContainer" containerID="dda8ac60f550a2e96f02464275f0b11a82d9a3d53d2e2270e9d67c06ea4c3b44" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.616058 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.627995 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.641845 4985 scope.go:117] "RemoveContainer" containerID="3c2283779a914e25036c37ef2827bd05492395f0fd0244baa58d85cf05f996a1" Jan 28 20:06:01 crc kubenswrapper[4985]: I0128 20:06:01.280556 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" path="/var/lib/kubelet/pods/0762e6e7-b454-432f-91b7-b8cefccdc85e/volumes" Jan 28 20:06:08 crc kubenswrapper[4985]: I0128 20:06:08.304595 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.442366604s: [/var/lib/containers/storage/overlay/2b74aa33c03668223a87dd3c1ff4a84a09224e18713c6538d4c947dab78be4d8/diff /var/log/pods/openstack_openstackclient_1d8f391e-0ed3-4969-b61b-5b9d602644fa/openstackclient/0.log]; will not log again for this container unless duration exceeds 3s Jan 28 20:06:10 crc kubenswrapper[4985]: I0128 20:06:10.701841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" event={"ID":"e4275dde-20a8-4f67-8ad6-3599ced73c5a","Type":"ContainerStarted","Data":"6f9e46511089ed1317a6f65cf916f19a8e3ebe9ec1c94201d055df23d13e16ad"} Jan 28 20:06:10 crc kubenswrapper[4985]: I0128 20:06:10.722137 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" podStartSLOduration=1.256446869 podStartE2EDuration="14.722120885s" podCreationTimestamp="2026-01-28 20:05:56 +0000 UTC" firstStartedPulling="2026-01-28 20:05:56.56651735 +0000 UTC m=+6767.393080171" lastFinishedPulling="2026-01-28 20:06:10.032191366 +0000 UTC m=+6780.858754187" observedRunningTime="2026-01-28 20:06:10.715986461 +0000 UTC m=+6781.542549282" watchObservedRunningTime="2026-01-28 20:06:10.722120885 +0000 UTC m=+6781.548683706" Jan 28 20:06:47 crc kubenswrapper[4985]: I0128 20:06:47.161429 4985 generic.go:334] "Generic (PLEG): container finished" podID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerID="7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d" exitCode=0 Jan 28 20:06:47 crc kubenswrapper[4985]: I0128 20:06:47.161500 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" event={"ID":"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde","Type":"ContainerDied","Data":"7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d"} Jan 28 20:06:48 crc kubenswrapper[4985]: I0128 20:06:48.193153 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" event={"ID":"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde","Type":"ContainerStarted","Data":"127164fb038939b87b998bbc470dbfa25a25034bad6586262e8b9900a8bf292f"} Jan 28 20:07:02 crc kubenswrapper[4985]: I0128 20:07:02.368311 4985 generic.go:334] "Generic (PLEG): container finished" podID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" 
containerID="6f9e46511089ed1317a6f65cf916f19a8e3ebe9ec1c94201d055df23d13e16ad" exitCode=0 Jan 28 20:07:02 crc kubenswrapper[4985]: I0128 20:07:02.368423 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" event={"ID":"e4275dde-20a8-4f67-8ad6-3599ced73c5a","Type":"ContainerDied","Data":"6f9e46511089ed1317a6f65cf916f19a8e3ebe9ec1c94201d055df23d13e16ad"} Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.514196 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.566183 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-tsjq4"] Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.576907 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-tsjq4"] Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.624769 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") pod \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.624898 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host" (OuterVolumeSpecName: "host") pod "e4275dde-20a8-4f67-8ad6-3599ced73c5a" (UID: "e4275dde-20a8-4f67-8ad6-3599ced73c5a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.624987 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") pod \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.625730 4985 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.635056 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4" (OuterVolumeSpecName: "kube-api-access-7hjd4") pod "e4275dde-20a8-4f67-8ad6-3599ced73c5a" (UID: "e4275dde-20a8-4f67-8ad6-3599ced73c5a"). InnerVolumeSpecName "kube-api-access-7hjd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.728242 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.766852 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.766899 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.394330 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="788e0621889e18f29167784cbe9d1a5ffba373376c1a278b0e926707a59d5ab2" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.394660 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.769889 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-qpf2f"] Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770444 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" containerName="container-00" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770460 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" containerName="container-00" Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770475 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770481 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770516 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770522 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770539 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="extract-content" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770544 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="extract-content" Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770560 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="extract-utilities" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770566 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="extract-utilities" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770771 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770809 
Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770809 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" containerName="container-00"
Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.771673 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f"
Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.850675 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f"
Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.850902 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f"
Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.952512 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f"
Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.952619 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f"
Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.952940 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f"
Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.971355 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f"
Jan 28 20:07:05 crc kubenswrapper[4985]: I0128 20:07:05.088775 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f"
Jan 28 20:07:05 crc kubenswrapper[4985]: W0128 20:07:05.157707 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b22e0bb_441d_4cda_8e55_82ad8593f13c.slice/crio-b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3 WatchSource:0}: Error finding container b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3: Status 404 returned error can't find the container with id b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3
Jan 28 20:07:05 crc kubenswrapper[4985]: I0128 20:07:05.281014 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" path="/var/lib/kubelet/pods/e4275dde-20a8-4f67-8ad6-3599ced73c5a/volumes"
Jan 28 20:07:05 crc kubenswrapper[4985]: I0128 20:07:05.413798 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" event={"ID":"6b22e0bb-441d-4cda-8e55-82ad8593f13c","Type":"ContainerStarted","Data":"b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3"}
Jan 28 20:07:06 crc kubenswrapper[4985]: I0128 20:07:06.425940 4985 generic.go:334] "Generic (PLEG): container finished" podID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" containerID="ae043829729a5304a684bda1750cb3b2c47fa611ecf13670e0e552bc36940e3c" exitCode=0
Jan 28 20:07:06 crc kubenswrapper[4985]: I0128 20:07:06.426037 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" event={"ID":"6b22e0bb-441d-4cda-8e55-82ad8593f13c","Type":"ContainerDied","Data":"ae043829729a5304a684bda1750cb3b2c47fa611ecf13670e0e552bc36940e3c"}
Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.582064 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f"
Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.724973 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") pod \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") "
Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.725356 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") pod \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") "
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.726027 4985 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.732104 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh" (OuterVolumeSpecName: "kube-api-access-v6sjh") pod "6b22e0bb-441d-4cda-8e55-82ad8593f13c" (UID: "6b22e0bb-441d-4cda-8e55-82ad8593f13c"). InnerVolumeSpecName "kube-api-access-v6sjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.828239 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:08 crc kubenswrapper[4985]: I0128 20:07:08.345897 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-qpf2f"] Jan 28 20:07:08 crc kubenswrapper[4985]: I0128 20:07:08.356662 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-qpf2f"] Jan 28 20:07:08 crc kubenswrapper[4985]: I0128 20:07:08.455377 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3" Jan 28 20:07:08 crc kubenswrapper[4985]: I0128 20:07:08.455440 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.278862 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" path="/var/lib/kubelet/pods/6b22e0bb-441d-4cda-8e55-82ad8593f13c/volumes" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.561533 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-zr5mq"] Jan 28 20:07:09 crc kubenswrapper[4985]: E0128 20:07:09.562367 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" containerName="container-00" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.562385 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" containerName="container-00" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.562622 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.562653 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" containerName="container-00" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.563478 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.682716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.682840 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.787196 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.787358 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.787488 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.823654 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.896567 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: W0128 20:07:09.939262 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ef092c5_c571_4b51_bd8d_16f348128393.slice/crio-f3d7f1587c5736e90ae1ce34980089bf4a85618e1fb53002185047bfbae92c53 WatchSource:0}: Error finding container f3d7f1587c5736e90ae1ce34980089bf4a85618e1fb53002185047bfbae92c53: Status 404 returned error can't find the container with id f3d7f1587c5736e90ae1ce34980089bf4a85618e1fb53002185047bfbae92c53 Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.481217 4985 generic.go:334] "Generic (PLEG): container finished" podID="6ef092c5-c571-4b51-bd8d-16f348128393" containerID="ee620cce9e13ced05e21107f3a230592d8cb95fd00ed4f37d416b23d67a3024d" exitCode=0 Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.481288 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" event={"ID":"6ef092c5-c571-4b51-bd8d-16f348128393","Type":"ContainerDied","Data":"ee620cce9e13ced05e21107f3a230592d8cb95fd00ed4f37d416b23d67a3024d"} Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.481312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" event={"ID":"6ef092c5-c571-4b51-bd8d-16f348128393","Type":"ContainerStarted","Data":"f3d7f1587c5736e90ae1ce34980089bf4a85618e1fb53002185047bfbae92c53"} Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.522240 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-zr5mq"] Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.532739 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-zr5mq"] Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.186590 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.187053 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.634652 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.736493 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") pod \"6ef092c5-c571-4b51-bd8d-16f348128393\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.736575 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") pod \"6ef092c5-c571-4b51-bd8d-16f348128393\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.736940 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host" (OuterVolumeSpecName: "host") pod "6ef092c5-c571-4b51-bd8d-16f348128393" (UID: "6ef092c5-c571-4b51-bd8d-16f348128393"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.737695 4985 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.742809 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr" (OuterVolumeSpecName: "kube-api-access-qjhgr") pod "6ef092c5-c571-4b51-bd8d-16f348128393" (UID: "6ef092c5-c571-4b51-bd8d-16f348128393"). InnerVolumeSpecName "kube-api-access-qjhgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.839736 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:12 crc kubenswrapper[4985]: I0128 20:07:12.509017 4985 scope.go:117] "RemoveContainer" containerID="ee620cce9e13ced05e21107f3a230592d8cb95fd00ed4f37d416b23d67a3024d" Jan 28 20:07:12 crc kubenswrapper[4985]: I0128 20:07:12.509051 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:13 crc kubenswrapper[4985]: I0128 20:07:13.277010 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef092c5-c571-4b51-bd8d-16f348128393" path="/var/lib/kubelet/pods/6ef092c5-c571-4b51-bd8d-16f348128393/volumes" Jan 28 20:07:23 crc kubenswrapper[4985]: I0128 20:07:23.772572 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 20:07:23 crc kubenswrapper[4985]: I0128 20:07:23.776782 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.211689 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9f75cd8d-6a02-43e4-8e58-92f8d024311b/aodh-api/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.404009 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9f75cd8d-6a02-43e4-8e58-92f8d024311b/aodh-listener/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.407119 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9f75cd8d-6a02-43e4-8e58-92f8d024311b/aodh-evaluator/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.646835 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9f75cd8d-6a02-43e4-8e58-92f8d024311b/aodh-notifier/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.791481 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668ffb7f9d-shvfm_04b28283-6f65-478e-952d-f965423f413e/barbican-api-log/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.813279 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668ffb7f9d-shvfm_04b28283-6f65-478e-952d-f965423f413e/barbican-api/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.937150 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cc6bcfccd-rh55k_f4b18150-cbd6-4c6f-a28b-8c66b1e875f2/barbican-keystone-listener/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.090132 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cc6bcfccd-rh55k_f4b18150-cbd6-4c6f-a28b-8c66b1e875f2/barbican-keystone-listener-log/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.178778 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c84c9469f-9xntt_d885ddad-ecc9-4b73-ad9e-9da819f95107/barbican-worker/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.214646 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c84c9469f-9xntt_d885ddad-ecc9-4b73-ad9e-9da819f95107/barbican-worker-log/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.378073 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx_3865f1db-f707-4b28-bbf2-8ce1975baa1f/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.430918 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/ceilometer-central-agent/1.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.621179 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/ceilometer-central-agent/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.632181 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/ceilometer-notification-agent/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.659871 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/proxy-httpd/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.676616 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/sg-core/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.873242 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_841350c5-b9e8-4331-9282-e129f8152153/cinder-api-log/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.924029 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_841350c5-b9e8-4331-9282-e129f8152153/cinder-api/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.122456 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_07cf4e1d-9eb6-491a-90a5-dc30af589bc0/cinder-scheduler/1.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.183965 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_07cf4e1d-9eb6-491a-90a5-dc30af589bc0/cinder-scheduler/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.206971 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_07cf4e1d-9eb6-491a-90a5-dc30af589bc0/probe/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.366636 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn_ed5a5127-7214-4f45-bda0-a1c6ecbaaede/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.471109 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc_89fa72dd-7320-41fe-8df4-161d84d41b84/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.593749 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-jqtwd_63ee6cb7-f768-47d8-a266-e1e6ca6926ea/init/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.754743 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-jqtwd_63ee6cb7-f768-47d8-a266-e1e6ca6926ea/init/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.803048 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-42d8l_fbfc48e7-8a35-4fc6-b9fd-0c1735864116/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.855535 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-jqtwd_63ee6cb7-f768-47d8-a266-e1e6ca6926ea/dnsmasq-dns/0.log" Jan 28 20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.061814 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9ff4e22d-1c99-4c30-9eaa-3225c1e868c7/glance-httpd/0.log" Jan 28 
20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.095577 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9ff4e22d-1c99-4c30-9eaa-3225c1e868c7/glance-log/0.log" Jan 28 20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.222015 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d7b0993c-0b43-44d7-8498-6808f2a1439e/glance-httpd/0.log" Jan 28 20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.297523 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d7b0993c-0b43-44d7-8498-6808f2a1439e/glance-log/0.log" Jan 28 20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.915836 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5df4f6c8f9-fvvqb_45d84233-dc44-4b3c-8aaa-f08ab50c0512/heat-engine/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.121055 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl_50ce12a8-7d79-4fa2-a879-e3082ba41427/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.267874 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-9d696c4dd-qgm9g_f91275ab-50ad-4d69-953f-764ccd276927/heat-api/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.310139 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-76b7548687-cmjrr_c761ae73-94d1-46be-afe6-1232e2c589ff/heat-cfnapi/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.363307 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-25775_3baf8df5-1989-4678-8268-058f46511cfd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.623759 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29493781-6kphz_7635ee1a-7676-44ad-af7f-ebfab7b56933/keystone-cron/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.831470 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29493841-rkhj6_c901d430-df5f-4afa-8a40-9ed18d2ad552/keystone-cron/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.871548 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e/kube-state-metrics/1.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.979876 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-77c7879f98-bcrvp_d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b/keystone-api/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.016870 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e/kube-state-metrics/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.097019 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-swns9_05f3f537-0392-45c7-af0d-36294670ed29/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.166300 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-wn6r7_c6c90c6c-aa78-4215-9c43-acd22891abfb/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 
20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.186151 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.186219 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.397780 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_6b1f6dd4-6d66-4f40-879f-5f0af3845842/mysqld-exporter/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.579507 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f49f9645f-bs9wr_2177b5b3-0121-4ff8-93dd-2f9ef36560f4/neutron-api/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.593024 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_88fe31db-8414-43ac-b547-fa0278d9508f/memcached/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.666847 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f49f9645f-bs9wr_2177b5b3-0121-4ff8-93dd-2f9ef36560f4/neutron-httpd/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.712109 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr_85887caf-94f1-4f74-820c-edba2628a8e6/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.128291 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_78b595e2-b61a-4921-8d69-28adfa53f6bb/nova-cell0-conductor-conductor/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.261181 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_11eaf6b3-7169-4587-af33-68f04428e630/nova-api-log/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.321564 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bbb020dd-95f1-4d78-9899-9fd0eca60584/nova-cell1-conductor-conductor/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.485614 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4e0bd087-7446-45b4-858b-7b514713d4fe/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.597166 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-68wk4_b129af39-361b-4dba-bdbb-31531c3a2ce9/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.663090 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_11eaf6b3-7169-4587-af33-68f04428e630/nova-api-api/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.728547 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7d99eaa1-3945-4192-9d61-7668d944bc63/nova-metadata-log/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.925775 
4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8253e52-6b52-45a9-b5d6-680d3dfbebe7/mysql-bootstrap/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.029330 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_bdade9ba-ba1b-4093-bc40-73f68c84615f/nova-scheduler-scheduler/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.267433 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8253e52-6b52-45a9-b5d6-680d3dfbebe7/mysql-bootstrap/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.274051 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8253e52-6b52-45a9-b5d6-680d3dfbebe7/galera/1.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.303007 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8253e52-6b52-45a9-b5d6-680d3dfbebe7/galera/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.467440 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/mysql-bootstrap/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.769809 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/galera/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.780948 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/mysql-bootstrap/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.840574 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/galera/1.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.008966 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1d8f391e-0ed3-4969-b61b-5b9d602644fa/openstackclient/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.105813 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9r84t_2d1c1ab5-7e43-47cd-8218-3d945574a79c/ovn-controller/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.453169 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vsdt5_d67712df-b1fe-463f-9a6c-c0591aa6cec2/openstack-network-exporter/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.461601 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f287q_2c181f14-26b7-49f4-9ae0-869d9b291938/ovsdb-server-init/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.588829 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7d99eaa1-3945-4192-9d61-7668d944bc63/nova-metadata-metadata/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.724164 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f287q_2c181f14-26b7-49f4-9ae0-869d9b291938/ovs-vswitchd/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.751714 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f287q_2c181f14-26b7-49f4-9ae0-869d9b291938/ovsdb-server-init/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.762468 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-f287q_2c181f14-26b7-49f4-9ae0-869d9b291938/ovsdb-server/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.824148 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-h47tw_7b281922-4bb4-45f8-b633-d82925f4814e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.949453 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_76a14385-7b25-48b8-8614-1a77892a1119/openstack-network-exporter/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.979365 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_76a14385-7b25-48b8-8614-1a77892a1119/ovn-northd/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.041601 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_76ff3fb3-d9e1-41dc-a644-8ac29cb97d11/openstack-network-exporter/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.134396 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_76ff3fb3-d9e1-41dc-a644-8ac29cb97d11/ovsdbserver-nb/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.183559 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e1c7625-25e1-442f-9f71-5d2a9323306c/openstack-network-exporter/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.204112 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e1c7625-25e1-442f-9f71-5d2a9323306c/ovsdbserver-sb/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.373658 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-848676699d-9lbcr_cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1/placement-api/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.448592 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/init-config-reloader/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.509618 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-848676699d-9lbcr_cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1/placement-log/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.653331 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/config-reloader/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.654029 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/prometheus/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.660064 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/init-config-reloader/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.692920 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/thanos-sidecar/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.829354 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34d82dad-dc98-4c0f-90c2-0b25f7d73c01/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.031531 4985 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34d82dad-dc98-4c0f-90c2-0b25f7d73c01/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.073656 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34d82dad-dc98-4c0f-90c2-0b25f7d73c01/rabbitmq/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.098360 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.374953 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.378104 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_ae555e00-c2df-4fce-af07-a91133f8767d/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.433582 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe/rabbitmq/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.647932 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_ae555e00-c2df-4fce-af07-a91133f8767d/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.712095 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_249a0e05-d210-402f-b7f8-2caf153346d8/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.729602 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_ae555e00-c2df-4fce-af07-a91133f8767d/rabbitmq/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.890077 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_249a0e05-d210-402f-b7f8-2caf153346d8/setup-container/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.063523 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_249a0e05-d210-402f-b7f8-2caf153346d8/rabbitmq/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.084151 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb_b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.164993 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xgv8j_3b94af3f-603c-4a3e-966e-7a4bfbc78178/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.267627 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk_7a5d3484-2192-44a6-b632-5a683af945d6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.402337 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8kf5l_748912b6-cdb7-40bc-875e-563d7913a6dd/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.513808 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pbrcd_99c460d4-80df-4aac-9fc5-20198855b361/ssh-known-hosts-edpm-deployment/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.672076 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5bdcb887dc-rxkm6_12d4e4cf-9153-4a32-9155-f9d13a248a26/proxy-server/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.749220 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5bdcb887dc-rxkm6_12d4e4cf-9153-4a32-9155-f9d13a248a26/proxy-httpd/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.991916 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-l4q82_75109476-5e36-45b8-afb9-1e7f3a9331f9/swift-ring-rebalance/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.134636 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/account-auditor/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.168694 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/account-reaper/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.213039 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/account-server/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.239419 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/account-replicator/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.342330 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/container-auditor/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.424682 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/container-server/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.431561 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/container-updater/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.437759 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/container-replicator/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.530289 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-auditor/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.556916 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-expirer/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.601472 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-server/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.616718 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-updater/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.632907 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-replicator/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.715698 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/rsync/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.770298 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/swift-recon-cron/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.848943 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lhknq_557f8a1e-1a37-47a3-aa41-7222181ea137/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.965602 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls_d9d4a4e3-9f29-45a2-9748-d133f122af06/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:49 crc kubenswrapper[4985]: I0128 20:07:49.156411 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e5d86a77-6a87-4434-b571-f453639eb3a2/test-operator-logs-container/0.log" Jan 28 20:07:49 crc kubenswrapper[4985]: I0128 20:07:49.413507 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5h28l_ae55970b-52a8-4bd7-8d82-853e9cd4ad32/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:49 crc kubenswrapper[4985]: I0128 20:07:49.436070 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a808dc72-a951-4f07-a612-2fde39a49a30/tempest-tests-tempest-tests-runner/0.log" Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.185562 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.186168 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.186217 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.215664 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.215778 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" 
containerName="machine-config-daemon" containerID="cri-o://feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9" gracePeriod=600 Jan 28 20:08:12 crc kubenswrapper[4985]: I0128 20:08:12.213828 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9" exitCode=0 Jan 28 20:08:12 crc kubenswrapper[4985]: I0128 20:08:12.213927 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9"} Jan 28 20:08:12 crc kubenswrapper[4985]: I0128 20:08:12.214510 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"} Jan 28 20:08:12 crc kubenswrapper[4985]: I0128 20:08:12.214545 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:08:14 crc kubenswrapper[4985]: I0128 20:08:14.801082 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/util/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.080176 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/util/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.101662 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/pull/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.147509 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/pull/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.259766 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/util/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.286791 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/pull/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.328012 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/extract/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.554701 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-ww4nj_4fa1b302-aad3-4e6e-9cd2-bba65262c1e8/manager/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.573320 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-7gfrh_7ef21481-ade5-436a-ae3a-f284a7e438d3/manager/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.686350 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-75d84_4dfb4621-d061-4224-8aee-840726565aa3/manager/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.875000 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-6bdmh_99893bb5-33ef-4159-bf8f-1c79a58e74d9/manager/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.887319 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fm7nr_cc7f29e1-e6e0-45a0-920a-4b18d8204c65/manager/1.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.066265 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-6skp6_99b88683-3e0a-4afa-91ab-71feac27fba1/manager/1.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.081987 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fm7nr_cc7f29e1-e6e0-45a0-920a-4b18d8204c65/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.129703 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-6skp6_99b88683-3e0a-4afa-91ab-71feac27fba1/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.318606 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-5zqpj_697da6ae-2950-468c-82e9-bcb1a1af61e7/manager/1.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.495675 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-s2n6z_75e682e9-e5a5-47f1-83cc-c8004ebe224a/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.636992 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-hktv5_b5a0c28d-1434-40f0-8759-d76b65dc2c30/manager/1.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.639397 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-5zqpj_697da6ae-2950-468c-82e9-bcb1a1af61e7/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.786946 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-hktv5_b5a0c28d-1434-40f0-8759-d76b65dc2c30/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.890628 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-9lm5f_654a2c56-81a7-4b32-ad1d-c4d60b054b47/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.008088 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-rbn84_9897766d-6497-4d0e-bd9a-ef8e31a08e24/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.211194 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-dlssr_873dc5cd-5c8e-417e-b99a-a52dfcfd701b/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.254993 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-7mtzf_9c7284ab-b40f-4275-b85e-77aebd660135/manager/1.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.407114 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-7mtzf_9c7284ab-b40f-4275-b85e-77aebd660135/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.409487 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-4smn2_367b6525-0367-437a-9fe3-b2007411f4af/manager/1.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.500884 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-4smn2_367b6525-0367-437a-9fe3-b2007411f4af/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.589319 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz_70329607-4bbe-43ad-bb7a-2b62f26af473/manager/1.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.662178 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz_70329607-4bbe-43ad-bb7a-2b62f26af473/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.804297 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-687c66fd56-xdvhx_82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62/operator/1.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.957168 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-687c66fd56-xdvhx_82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62/operator/0.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.164852 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wnjfp_3314cb32-9bb8-46fd-b28e-5a6e9b779fa7/registry-server/1.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.285759 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wnjfp_3314cb32-9bb8-46fd-b28e-5a6e9b779fa7/registry-server/0.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.532366 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-v5mmf_50682373-a3d7-491e-84a0-1d5613ee2e8a/manager/1.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.563507 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-v5mmf_50682373-a3d7-491e-84a0-1d5613ee2e8a/manager/0.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.736086 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-qn5x9_91971c24-6187-432c-84ba-65dba69b4598/manager/1.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.760240 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-qn5x9_91971c24-6187-432c-84ba-65dba69b4598/manager/0.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.930741 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7s7s2_38846228-cec9-4a59-b9bb-c766121dacde/operator/1.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.117919 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7s7s2_38846228-cec9-4a59-b9bb-c766121dacde/operator/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.146115 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-9kbdr_c95374e8-7d41-4a49-add9-7f28196d70eb/manager/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.345793 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68b9ccc946-rk65w_c1e8524e-e047-4872-9ee1-ae4e013f8825/manager/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.378223 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74c974475f-b9j67_359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3/manager/1.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.592454 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xwzkh_1310770f-7cb7-4874-b2a0-4ef733911716/manager/1.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.645522 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xwzkh_1310770f-7cb7-4874-b2a0-4ef733911716/manager/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.675641 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-xzkhh_d4d6e990-839d-4186-9382-1a67922556df/manager/1.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.708543 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74c974475f-b9j67_359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3/manager/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.787589 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-xzkhh_d4d6e990-839d-4186-9382-1a67922556df/manager/0.log" Jan 28 20:08:40 crc kubenswrapper[4985]: I0128 20:08:40.021063 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wp27s_7f89cfdf-2a4d-4582-94f4-e53c45c3e09c/control-plane-machine-set-operator/0.log" Jan 28 20:08:40 crc kubenswrapper[4985]: I0128 20:08:40.205520 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hjjf7_218b57d8-c3a3-4a33-a3ef-6701cf557911/kube-rbac-proxy/0.log" Jan 28 20:08:40 crc kubenswrapper[4985]: I0128 20:08:40.262511 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hjjf7_218b57d8-c3a3-4a33-a3ef-6701cf557911/machine-api-operator/0.log" Jan 28 20:08:49 crc kubenswrapper[4985]: I0128 20:08:49.546311 4985 scope.go:117] "RemoveContainer" 
containerID="eaa8b31fd567cbe5402dee337791c77b7d17c2a64b306b5f934b501e7555c359" Jan 28 20:08:49 crc kubenswrapper[4985]: I0128 20:08:49.596348 4985 scope.go:117] "RemoveContainer" containerID="5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599" Jan 28 20:08:49 crc kubenswrapper[4985]: I0128 20:08:49.633563 4985 scope.go:117] "RemoveContainer" containerID="6aae3f87a8a75e8de0eb7f2174fb7e1ad791b3b13463186c8a127596ad993426" Jan 28 20:08:55 crc kubenswrapper[4985]: I0128 20:08:55.633901 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-dzhtm_4f9db9b6-ec43-4789-9efd-f2d4831c67e8/cert-manager-controller/0.log" Jan 28 20:08:55 crc kubenswrapper[4985]: I0128 20:08:55.800799 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-bcvwj_aa962965-4b70-40f4-8400-b7ff2ec182e9/cert-manager-cainjector/0.log" Jan 28 20:08:55 crc kubenswrapper[4985]: I0128 20:08:55.868289 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mwrk6_26777afd-4d9f-4ebb-b8ed-0be018fa5a17/cert-manager-webhook/1.log" Jan 28 20:08:55 crc kubenswrapper[4985]: I0128 20:08:55.909434 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mwrk6_26777afd-4d9f-4ebb-b8ed-0be018fa5a17/cert-manager-webhook/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.151953 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-slwkn_b866e710-8894-47da-9251-4118fec613bd/nmstate-console-plugin/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.347894 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gkjzc_8f0319d2-9602-42b4-a3fb-c53bf5d3c244/nmstate-handler/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.387820 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vznlg_05eeb2e4-510c-4b66-addf-efaddce8cfb0/kube-rbac-proxy/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.408701 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vznlg_05eeb2e4-510c-4b66-addf-efaddce8cfb0/nmstate-metrics/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.561058 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-ztr6n_e130755a-0d4d-4efd-a08a-a3bda72ff4cf/nmstate-operator/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.626045 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jrf9w_645ec0ef-97a6-4e2f-b691-ffcbcab4eed7/nmstate-webhook/0.log" Jan 28 20:09:26 crc kubenswrapper[4985]: I0128 20:09:26.458189 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/kube-rbac-proxy/0.log" Jan 28 20:09:26 crc kubenswrapper[4985]: I0128 20:09:26.539985 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/manager/1.log" Jan 28 20:09:26 crc kubenswrapper[4985]: I0128 20:09:26.651690 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/manager/0.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.501054 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-s9875_74fbf9d6-ccb4-4d90-9db8-2d4613334d81/prometheus-operator/0.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.727965 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_23ef5df5-bfbe-4465-8e87-d69896bf70aa/prometheus-operator-admission-webhook/0.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.833733 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_e192375e-5db5-46e4-922b-21b8bc5698ba/prometheus-operator-admission-webhook/0.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.949963 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-nfhqj_a23ac89d-75e4-4511-afaa-ef9d6205a672/operator/1.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.974439 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-nfhqj_a23ac89d-75e4-4511-afaa-ef9d6205a672/operator/0.log" Jan 28 20:09:41 crc kubenswrapper[4985]: I0128 20:09:41.085161 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-5w5dn_c9b84394-02f1-4bde-befe-a2a649925c93/observability-ui-dashboards/0.log" Jan 28 20:09:41 crc kubenswrapper[4985]: I0128 20:09:41.217813 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-j7z4h_971845b8-805d-4b4a-a8fd-14f263f17695/perses-operator/0.log" Jan 28 20:09:45 crc kubenswrapper[4985]: I0128 20:09:45.620219 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" podUID="12d4e4cf-9153-4a32-9155-f9d13a248a26" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.314307 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-d28w5_4db97b28-803f-4b66-9322-f210440517ff/cluster-logging-operator/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.465210 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-gthjs_be7250ed-2e5a-403a-abfa-f1855e86ae44/collector/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.514671 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_ac72f54d-936d-4c98-9f91-918f7a05b5d1/loki-compactor/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.675885 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-2755m_effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb/loki-distributor/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.789912 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76696895d9-c6d96_02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b/gateway/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.850594 4985 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76696895d9-c6d96_02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b/opa/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.982000 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76696895d9-g5tqr_ae6864ac-d6e2-4d85-aa84-361f51b944eb/gateway/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.091654 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76696895d9-g5tqr_ae6864ac-d6e2-4d85-aa84-361f51b944eb/opa/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.108785 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_664a7afe-25ae-45f8-81bd-9a9c59c431cd/loki-index-gateway/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.358209 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-dkn9m_21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7/loki-querier/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.359168 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_e322915e-933c-4de4-98dd-ef047ee5b056/loki-ingester/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.540869 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-pcd6x_5c56d4fe-62c7-47ef-9a0f-607d899d19b8/loki-query-frontend/0.log" Jan 28 20:10:11 crc kubenswrapper[4985]: I0128 20:10:11.186428 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:10:11 crc kubenswrapper[4985]: I0128 20:10:11.187003 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:10:16 crc kubenswrapper[4985]: I0128 20:10:16.892465 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8f79k_5fd77adb-e801-4d3f-ac61-64615952aebd/controller/1.log" Jan 28 20:10:16 crc kubenswrapper[4985]: I0128 20:10:16.998506 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8f79k_5fd77adb-e801-4d3f-ac61-64615952aebd/kube-rbac-proxy/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.003084 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8f79k_5fd77adb-e801-4d3f-ac61-64615952aebd/controller/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.173539 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-frr-files/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.348818 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-metrics/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.377326 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-reloader/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.377636 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-frr-files/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.432845 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-reloader/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.589269 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-frr-files/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.638511 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-metrics/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.668045 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-reloader/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.676907 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-metrics/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.829747 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-reloader/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.864618 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/controller/1.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.868388 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-frr-files/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.875988 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-metrics/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.029758 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/controller/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.117952 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/frr/1.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.152577 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/frr-metrics/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.297106 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/kube-rbac-proxy/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.618549 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/kube-rbac-proxy-frr/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.722944 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/reloader/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 
20:10:18.843396 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-szgpw_f6ebe169-8b20-4d94-99b7-96afffcb5118/frr-k8s-webhook-server/1.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.037477 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-szgpw_f6ebe169-8b20-4d94-99b7-96afffcb5118/frr-k8s-webhook-server/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.087596 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74b956d56f-86jl5_c77a825c-f720-48a7-b74f-49b16e3ecbed/manager/1.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.374331 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fd7b78bd4-c2clz_57ef54a5-9891-4f69-9907-b726d30d4006/webhook-server/1.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.406483 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74b956d56f-86jl5_c77a825c-f720-48a7-b74f-49b16e3ecbed/manager/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.619628 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/frr/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.630440 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fd7b78bd4-c2clz_57ef54a5-9891-4f69-9907-b726d30d4006/webhook-server/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.703823 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lq6d_b5094b56-07e5-45db-8a13-ce7b931b861e/kube-rbac-proxy/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.992686 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lq6d_b5094b56-07e5-45db-8a13-ce7b931b861e/speaker/1.log" Jan 28 20:10:20 crc kubenswrapper[4985]: I0128 20:10:20.285912 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lq6d_b5094b56-07e5-45db-8a13-ce7b931b861e/speaker/0.log" Jan 28 20:10:34 crc kubenswrapper[4985]: I0128 20:10:34.793765 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.064730 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.073040 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.113379 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.225615 4985 log.go:25] "Finished parsing log file" 
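Every path in these records follows the kubelet's pod log layout, /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart-count>.log, which is why the same container shows both 0.log and 1.log after a restart. A self-contained sketch of decomposing such a path (podLogRef is an illustrative type, not a kubelet API):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// podLogRef is a hypothetical decomposition of a kubelet pod log path:
// /var/log/pods/<namespace>_<pod>_<uid>/<container>/<restart>.log
type podLogRef struct {
	Namespace, Pod, UID, Container, Restart string
}

func parsePodLogPath(p string) (podLogRef, error) {
	rel, err := filepath.Rel("/var/log/pods", p)
	if err != nil {
		return podLogRef{}, err
	}
	parts := strings.Split(rel, string(filepath.Separator))
	if len(parts) != 3 {
		return podLogRef{}, fmt.Errorf("unexpected layout: %s", p)
	}
	meta := strings.SplitN(parts[0], "_", 3)
	if len(meta) != 3 {
		return podLogRef{}, fmt.Errorf("unexpected pod dir: %s", parts[0])
	}
	return podLogRef{meta[0], meta[1], meta[2], parts[1], strings.TrimSuffix(parts[2], ".log")}, nil
}

func main() {
	ref, err := parsePodLogPath("/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/registry-server/0.log")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", ref)
}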
path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.296241 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/extract/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.300996 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.438132 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.687922 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.735640 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.740928 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.878841 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.911519 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.949019 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/extract/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.064593 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/util/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.316151 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/pull/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.351290 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/util/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.354676 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/pull/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.533830 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/util/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.560392 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/pull/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.606618 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/extract/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.718078 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.007371 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.019685 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.042832 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.246692 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/extract/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.266401 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.271607 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.465730 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.663942 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.675791 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.704168 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.910269 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.914506 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.918746 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/extract/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.125764 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-utilities/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.330201 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-content/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.338945 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-utilities/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.339613 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-content/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.834579 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-content/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.848677 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-utilities/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.854956 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-utilities/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.958610 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/registry-server/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.129894 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-utilities/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.136037 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-content/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.140581 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-content/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.296367 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-utilities/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.309849 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-content/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.410842 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/registry-server/1.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.607447 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hvkcw_4845499d-139f-4839-9f9f-4d77c7f0ae37/marketplace-operator/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.625190 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hvkcw_4845499d-139f-4839-9f9f-4d77c7f0ae37/marketplace-operator/1.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.724435 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-utilities/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.939017 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-content/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.962872 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-utilities/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.968133 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-content/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.252599 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-content/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.260151 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-utilities/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.362867 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/registry-server/1.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.543480 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/registry-server/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.585748 4985 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/registry-server/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.664436 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-utilities/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.838350 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-utilities/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.864389 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-content/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.880377 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-content/0.log" Jan 28 20:10:41 crc kubenswrapper[4985]: I0128 20:10:41.066461 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-utilities/0.log" Jan 28 20:10:41 crc kubenswrapper[4985]: I0128 20:10:41.084952 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-content/0.log" Jan 28 20:10:41 crc kubenswrapper[4985]: I0128 20:10:41.185624 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:10:41 crc kubenswrapper[4985]: I0128 20:10:41.185682 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:10:42 crc kubenswrapper[4985]: I0128 20:10:42.181847 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/registry-server/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.151296 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_23ef5df5-bfbe-4465-8e87-d69896bf70aa/prometheus-operator-admission-webhook/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.168853 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-s9875_74fbf9d6-ccb4-4d90-9db8-2d4613334d81/prometheus-operator/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.204431 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_e192375e-5db5-46e4-922b-21b8bc5698ba/prometheus-operator-admission-webhook/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.409323 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-nfhqj_a23ac89d-75e4-4511-afaa-ef9d6205a672/operator/1.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.425106 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-nfhqj_a23ac89d-75e4-4511-afaa-ef9d6205a672/operator/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.461928 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-5w5dn_c9b84394-02f1-4bde-befe-a2a649925c93/observability-ui-dashboards/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.540499 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-j7z4h_971845b8-805d-4b4a-a8fd-14f263f17695/perses-operator/0.log" Jan 28 20:11:09 crc kubenswrapper[4985]: I0128 20:11:09.089177 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/kube-rbac-proxy/0.log" Jan 28 20:11:09 crc kubenswrapper[4985]: I0128 20:11:09.223138 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/manager/1.log" Jan 28 20:11:09 crc kubenswrapper[4985]: I0128 20:11:09.225632 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/manager/0.log" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.186374 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.186946 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.186992 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.187987 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.188066 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" gracePeriod=600 Jan 28 20:11:11 crc kubenswrapper[4985]: E0128 20:11:11.308653 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.544921 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" exitCode=0 Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.544972 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"} Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.545010 4985 scope.go:117] "RemoveContainer" containerID="feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.545746 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:11:11 crc kubenswrapper[4985]: E0128 20:11:11.546094 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:11:25 crc kubenswrapper[4985]: I0128 20:11:25.263948 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:11:25 crc kubenswrapper[4985]: E0128 20:11:25.264875 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:11:35 crc kubenswrapper[4985]: I0128 20:11:35.012321 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.670921524s: [/var/lib/containers/storage/overlay/2f12c37c8eb1e2c5e02f58419690d5a8b196e336584f7ad4540ca4dbdf5fe0b9/diff /var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-7mtzf_9c7284ab-b40f-4275-b85e-77aebd660135/manager/1.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:11:35 crc kubenswrapper[4985]: I0128 20:11:35.013034 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.300428756s: [/var/lib/containers/storage/overlay/b7e64f0091f970033e5ed5c0641d5b64ec853c9c21c50a8609f6bef14f51773c/diff /var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fm7nr_cc7f29e1-e6e0-45a0-920a-4b18d8204c65/manager/1.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:11:35 crc kubenswrapper[4985]: I0128 20:11:35.014413 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs 
Jan 28 20:11:35 crc kubenswrapper[4985]: I0128 20:11:35.016220 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.22469184s: [/var/lib/containers/storage/overlay/256b396208fda6cd62f0180af4b905a209625c70c4b22876c86c69eaf719a8d8/diff /var/log/pods/openstack_swift-proxy-5bdcb887dc-rxkm6_12d4e4cf-9153-4a32-9155-f9d13a248a26/proxy-server/0.log]; will not log again for this container unless duration exceeds 2s
Jan 28 20:11:36 crc kubenswrapper[4985]: I0128 20:11:36.264170 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"
Jan 28 20:11:36 crc kubenswrapper[4985]: E0128 20:11:36.264724 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:11:48 crc kubenswrapper[4985]: I0128 20:11:48.264128 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"
Jan 28 20:11:48 crc kubenswrapper[4985]: E0128 20:11:48.265272 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:11:59 crc kubenswrapper[4985]: I0128 20:11:59.264432 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"
Jan 28 20:11:59 crc kubenswrapper[4985]: E0128 20:11:59.265921 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:12:11 crc kubenswrapper[4985]: I0128 20:12:11.281308 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"
Jan 28 20:12:11 crc kubenswrapper[4985]: E0128 20:12:11.282625 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:12:25 crc kubenswrapper[4985]: I0128 20:12:25.265045 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"
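The fsHandler records above time a disk-usage and inode scan over a container's overlay diff directory and its pod log directory, and are only emitted when the scan is slow (and then silenced unless it later exceeds 2s). A rough sketch of timing such a walk, with the log threshold as an assumption rather than cadvisor's exact rule:

package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"time"
)

// timedUsage walks dir the way a filesystem handler sizes container
// directories, and reports how long the scan took so slow walks (like
// the 1.67s one above) can be flagged.
func timedUsage(dir string, logThreshold time.Duration) (bytes int64, took time.Duration, err error) {
	start := time.Now()
	err = filepath.WalkDir(dir, func(_ string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		if info, ierr := d.Info(); ierr == nil {
			bytes += info.Size()
		}
		return nil
	})
	took = time.Since(start)
	if took > logThreshold {
		fmt.Printf("fs: disk usage on %s took %v\n", dir, took)
	}
	return bytes, took, err
}

func main() {
	n, took, err := timedUsage(os.TempDir(), time.Second)
	fmt.Println(n, took, err)
}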
containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:12:25 crc kubenswrapper[4985]: E0128 20:12:25.266785 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:12:37 crc kubenswrapper[4985]: I0128 20:12:37.263967 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:12:37 crc kubenswrapper[4985]: E0128 20:12:37.264919 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:12:49 crc kubenswrapper[4985]: I0128 20:12:49.264285 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:12:49 crc kubenswrapper[4985]: E0128 20:12:49.264912 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:12:49 crc kubenswrapper[4985]: I0128 20:12:49.830293 4985 scope.go:117] "RemoveContainer" containerID="6f9e46511089ed1317a6f65cf916f19a8e3ebe9ec1c94201d055df23d13e16ad" Jan 28 20:13:02 crc kubenswrapper[4985]: I0128 20:13:02.264541 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:13:02 crc kubenswrapper[4985]: E0128 20:13:02.265363 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:13:10 crc kubenswrapper[4985]: I0128 20:13:10.131035 4985 generic.go:334] "Generic (PLEG): container finished" podID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerID="0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629" exitCode=0 Jan 28 20:13:10 crc kubenswrapper[4985]: I0128 20:13:10.131600 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" event={"ID":"b1ab1977-13f1-41b6-9edd-cbb936fb8485","Type":"ContainerDied","Data":"0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629"} Jan 28 20:13:10 crc kubenswrapper[4985]: I0128 20:13:10.132498 4985 scope.go:117] "RemoveContainer" containerID="0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629" Jan 28 20:13:10 crc 
kubenswrapper[4985]: I0128 20:13:10.568868 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6vz_must-gather-9vwtc_b1ab1977-13f1-41b6-9edd-cbb936fb8485/gather/0.log" Jan 28 20:13:15 crc kubenswrapper[4985]: I0128 20:13:15.264666 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:13:15 crc kubenswrapper[4985]: E0128 20:13:15.265390 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:13:18 crc kubenswrapper[4985]: I0128 20:13:18.672664 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"] Jan 28 20:13:18 crc kubenswrapper[4985]: I0128 20:13:18.673455 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="copy" containerID="cri-o://5355598335d0d9dff197dc4d09b9b325ee69e3336b9f5be9371d1aa865456367" gracePeriod=2 Jan 28 20:13:18 crc kubenswrapper[4985]: I0128 20:13:18.686994 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"] Jan 28 20:13:19 crc kubenswrapper[4985]: I0128 20:13:19.271588 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6vz_must-gather-9vwtc_b1ab1977-13f1-41b6-9edd-cbb936fb8485/copy/0.log" Jan 28 20:13:19 crc kubenswrapper[4985]: I0128 20:13:19.272792 4985 generic.go:334] "Generic (PLEG): container finished" podID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerID="5355598335d0d9dff197dc4d09b9b325ee69e3336b9f5be9371d1aa865456367" exitCode=143 Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.101152 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6vz_must-gather-9vwtc_b1ab1977-13f1-41b6-9edd-cbb936fb8485/copy/0.log" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.101861 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.256358 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") pod \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.256736 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") pod \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.289712 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6" (OuterVolumeSpecName: "kube-api-access-j7qn6") pod "b1ab1977-13f1-41b6-9edd-cbb936fb8485" (UID: "b1ab1977-13f1-41b6-9edd-cbb936fb8485"). InnerVolumeSpecName "kube-api-access-j7qn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.295555 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6vz_must-gather-9vwtc_b1ab1977-13f1-41b6-9edd-cbb936fb8485/copy/0.log" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.303635 4985 scope.go:117] "RemoveContainer" containerID="5355598335d0d9dff197dc4d09b9b325ee69e3336b9f5be9371d1aa865456367" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.303892 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.369167 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") on node \"crc\" DevicePath \"\"" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.409788 4985 scope.go:117] "RemoveContainer" containerID="0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.565039 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b1ab1977-13f1-41b6-9edd-cbb936fb8485" (UID: "b1ab1977-13f1-41b6-9edd-cbb936fb8485"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.577094 4985 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 20:13:21 crc kubenswrapper[4985]: I0128 20:13:21.278870 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" path="/var/lib/kubelet/pods/b1ab1977-13f1-41b6-9edd-cbb936fb8485/volumes" Jan 28 20:13:30 crc kubenswrapper[4985]: I0128 20:13:30.264320 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:13:30 crc kubenswrapper[4985]: E0128 20:13:30.265057 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:13:43 crc kubenswrapper[4985]: I0128 20:13:43.264614 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:13:43 crc kubenswrapper[4985]: E0128 20:13:43.266131 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:13:49 crc kubenswrapper[4985]: I0128 20:13:49.938959 4985 scope.go:117] "RemoveContainer" containerID="ae043829729a5304a684bda1750cb3b2c47fa611ecf13670e0e552bc36940e3c" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.909176 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:13:50 crc kubenswrapper[4985]: E0128 20:13:50.909915 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="copy" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.909948 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="copy" Jan 28 20:13:50 crc kubenswrapper[4985]: E0128 20:13:50.909986 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="gather" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.909994 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="gather" Jan 28 20:13:50 crc kubenswrapper[4985]: E0128 20:13:50.910018 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef092c5-c571-4b51-bd8d-16f348128393" containerName="container-00" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.910025 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef092c5-c571-4b51-bd8d-16f348128393" containerName="container-00" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.910236 4985 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="copy" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.910278 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef092c5-c571-4b51-bd8d-16f348128393" containerName="container-00" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.910298 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="gather" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.914991 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.074875 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.074953 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.075449 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.099834 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.177786 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.177902 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.177948 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.178832 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") pod \"redhat-marketplace-jd6sm\" (UID: 
\"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.178836 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.207378 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.247363 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.967926 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:13:52 crc kubenswrapper[4985]: I0128 20:13:52.752377 4985 generic.go:334] "Generic (PLEG): container finished" podID="e9909b99-29bd-4096-a5f0-b43e54943093" containerID="fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00" exitCode=0 Jan 28 20:13:52 crc kubenswrapper[4985]: I0128 20:13:52.752434 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerDied","Data":"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00"} Jan 28 20:13:52 crc kubenswrapper[4985]: I0128 20:13:52.752664 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerStarted","Data":"f8915d028979414d1d3011e34cd62d73d66e9d07310be0513d6e50519dc6fc51"} Jan 28 20:13:52 crc kubenswrapper[4985]: I0128 20:13:52.758237 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 20:13:53 crc kubenswrapper[4985]: I0128 20:13:53.764711 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerStarted","Data":"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f"} Jan 28 20:13:54 crc kubenswrapper[4985]: I0128 20:13:54.264333 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:13:54 crc kubenswrapper[4985]: E0128 20:13:54.264641 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:13:54 crc kubenswrapper[4985]: I0128 20:13:54.781893 4985 generic.go:334] "Generic (PLEG): container finished" podID="e9909b99-29bd-4096-a5f0-b43e54943093" containerID="9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f" 
exitCode=0 Jan 28 20:13:54 crc kubenswrapper[4985]: I0128 20:13:54.781943 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerDied","Data":"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f"} Jan 28 20:13:55 crc kubenswrapper[4985]: I0128 20:13:55.797124 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerStarted","Data":"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9"} Jan 28 20:13:55 crc kubenswrapper[4985]: I0128 20:13:55.823844 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jd6sm" podStartSLOduration=3.396652202 podStartE2EDuration="5.823823613s" podCreationTimestamp="2026-01-28 20:13:50 +0000 UTC" firstStartedPulling="2026-01-28 20:13:52.753994473 +0000 UTC m=+7243.580557294" lastFinishedPulling="2026-01-28 20:13:55.181165884 +0000 UTC m=+7246.007728705" observedRunningTime="2026-01-28 20:13:55.815603291 +0000 UTC m=+7246.642166112" watchObservedRunningTime="2026-01-28 20:13:55.823823613 +0000 UTC m=+7246.650386434" Jan 28 20:14:01 crc kubenswrapper[4985]: I0128 20:14:01.248893 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:01 crc kubenswrapper[4985]: I0128 20:14:01.250424 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:01 crc kubenswrapper[4985]: I0128 20:14:01.306088 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:01 crc kubenswrapper[4985]: I0128 20:14:01.952288 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:02 crc kubenswrapper[4985]: I0128 20:14:02.005576 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:14:03 crc kubenswrapper[4985]: I0128 20:14:03.910590 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jd6sm" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="registry-server" containerID="cri-o://4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" gracePeriod=2 Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.400167 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.521930 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") pod \"e9909b99-29bd-4096-a5f0-b43e54943093\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.522274 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") pod \"e9909b99-29bd-4096-a5f0-b43e54943093\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.522389 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") pod \"e9909b99-29bd-4096-a5f0-b43e54943093\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.523015 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities" (OuterVolumeSpecName: "utilities") pod "e9909b99-29bd-4096-a5f0-b43e54943093" (UID: "e9909b99-29bd-4096-a5f0-b43e54943093"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.523367 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.528519 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t" (OuterVolumeSpecName: "kube-api-access-6wh7t") pod "e9909b99-29bd-4096-a5f0-b43e54943093" (UID: "e9909b99-29bd-4096-a5f0-b43e54943093"). InnerVolumeSpecName "kube-api-access-6wh7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.545432 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9909b99-29bd-4096-a5f0-b43e54943093" (UID: "e9909b99-29bd-4096-a5f0-b43e54943093"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.625334 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.625624 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.936568 4985 generic.go:334] "Generic (PLEG): container finished" podID="e9909b99-29bd-4096-a5f0-b43e54943093" containerID="4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" exitCode=0 Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.936712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerDied","Data":"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9"} Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.936752 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerDied","Data":"f8915d028979414d1d3011e34cd62d73d66e9d07310be0513d6e50519dc6fc51"} Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.936795 4985 scope.go:117] "RemoveContainer" containerID="4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.937322 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.982874 4985 scope.go:117] "RemoveContainer" containerID="9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.003839 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.013567 4985 scope.go:117] "RemoveContainer" containerID="fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.014753 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.087498 4985 scope.go:117] "RemoveContainer" containerID="4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" Jan 28 20:14:05 crc kubenswrapper[4985]: E0128 20:14:05.091343 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9\": container with ID starting with 4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9 not found: ID does not exist" containerID="4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.091409 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9"} err="failed to get container status \"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9\": rpc error: code = NotFound desc = could not find container \"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9\": container with ID starting with 4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9 not found: ID does not exist" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.091458 4985 scope.go:117] "RemoveContainer" containerID="9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f" Jan 28 20:14:05 crc kubenswrapper[4985]: E0128 20:14:05.091931 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f\": container with ID starting with 9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f not found: ID does not exist" containerID="9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.091991 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f"} err="failed to get container status \"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f\": rpc error: code = NotFound desc = could not find container \"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f\": container with ID starting with 9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f not found: ID does not exist" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.092024 4985 scope.go:117] "RemoveContainer" containerID="fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00" Jan 28 20:14:05 crc kubenswrapper[4985]: E0128 20:14:05.092559 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00\": container with ID starting with fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00 not found: ID does not exist" containerID="fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.092586 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00"} err="failed to get container status \"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00\": rpc error: code = NotFound desc = could not find container \"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00\": container with ID starting with fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00 not found: ID does not exist" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.278598 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" path="/var/lib/kubelet/pods/e9909b99-29bd-4096-a5f0-b43e54943093/volumes" Jan 28 20:14:06 crc kubenswrapper[4985]: I0128 20:14:06.264740 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:06 crc kubenswrapper[4985]: E0128 20:14:06.265095 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:18 crc kubenswrapper[4985]: I0128 20:14:18.263634 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:18 crc kubenswrapper[4985]: E0128 20:14:18.264441 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:29 crc kubenswrapper[4985]: I0128 20:14:29.274087 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:29 crc kubenswrapper[4985]: E0128 20:14:29.275147 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.501300 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:35 crc kubenswrapper[4985]: E0128 20:14:35.502627 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" 
containerName="registry-server" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.502645 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="registry-server" Jan 28 20:14:35 crc kubenswrapper[4985]: E0128 20:14:35.502662 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="extract-utilities" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.502670 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="extract-utilities" Jan 28 20:14:35 crc kubenswrapper[4985]: E0128 20:14:35.502726 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="extract-content" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.502734 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="extract-content" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.503078 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="registry-server" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.505125 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.519425 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.649466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.650139 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.650357 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.752938 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.753106 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") pod \"community-operators-729lv\" (UID: 
\"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.753232 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.753839 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.753952 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.779117 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.826641 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:36 crc kubenswrapper[4985]: I0128 20:14:36.421891 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:36 crc kubenswrapper[4985]: I0128 20:14:36.632809 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerStarted","Data":"9e822149934656a89cb6b96054892965dc78f52d082c19a8cb407cbcca399709"} Jan 28 20:14:37 crc kubenswrapper[4985]: I0128 20:14:37.647942 4985 generic.go:334] "Generic (PLEG): container finished" podID="780ddc55-e0ec-4274-8221-1da02779321b" containerID="51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225" exitCode=0 Jan 28 20:14:37 crc kubenswrapper[4985]: I0128 20:14:37.648031 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerDied","Data":"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225"} Jan 28 20:14:39 crc kubenswrapper[4985]: I0128 20:14:39.684289 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerStarted","Data":"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0"} Jan 28 20:14:43 crc kubenswrapper[4985]: I0128 20:14:43.264787 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:43 crc kubenswrapper[4985]: E0128 20:14:43.265765 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:43 crc kubenswrapper[4985]: I0128 20:14:43.732877 4985 generic.go:334] "Generic (PLEG): container finished" podID="780ddc55-e0ec-4274-8221-1da02779321b" containerID="689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0" exitCode=0 Jan 28 20:14:43 crc kubenswrapper[4985]: I0128 20:14:43.732949 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerDied","Data":"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0"} Jan 28 20:14:44 crc kubenswrapper[4985]: I0128 20:14:44.761616 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerStarted","Data":"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89"} Jan 28 20:14:44 crc kubenswrapper[4985]: I0128 20:14:44.802534 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-729lv" podStartSLOduration=3.305235282 podStartE2EDuration="9.802509633s" podCreationTimestamp="2026-01-28 20:14:35 +0000 UTC" firstStartedPulling="2026-01-28 20:14:37.651829269 +0000 UTC m=+7288.478392090" lastFinishedPulling="2026-01-28 20:14:44.14910362 +0000 UTC m=+7294.975666441" observedRunningTime="2026-01-28 
20:14:44.787835658 +0000 UTC m=+7295.614398489" watchObservedRunningTime="2026-01-28 20:14:44.802509633 +0000 UTC m=+7295.629072464" Jan 28 20:14:45 crc kubenswrapper[4985]: I0128 20:14:45.827662 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:45 crc kubenswrapper[4985]: I0128 20:14:45.827963 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:46 crc kubenswrapper[4985]: I0128 20:14:46.894975 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-729lv" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" probeResult="failure" output=< Jan 28 20:14:46 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:14:46 crc kubenswrapper[4985]: > Jan 28 20:14:55 crc kubenswrapper[4985]: I0128 20:14:55.888929 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:55 crc kubenswrapper[4985]: I0128 20:14:55.961289 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:56 crc kubenswrapper[4985]: I0128 20:14:56.129844 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:56 crc kubenswrapper[4985]: I0128 20:14:56.264110 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:56 crc kubenswrapper[4985]: E0128 20:14:56.264455 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:56 crc kubenswrapper[4985]: I0128 20:14:56.924499 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-729lv" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" containerID="cri-o://d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" gracePeriod=2 Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.526427 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.655279 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") pod \"780ddc55-e0ec-4274-8221-1da02779321b\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.655371 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") pod \"780ddc55-e0ec-4274-8221-1da02779321b\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.655438 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") pod \"780ddc55-e0ec-4274-8221-1da02779321b\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.659150 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities" (OuterVolumeSpecName: "utilities") pod "780ddc55-e0ec-4274-8221-1da02779321b" (UID: "780ddc55-e0ec-4274-8221-1da02779321b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.662702 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf" (OuterVolumeSpecName: "kube-api-access-7gmlf") pod "780ddc55-e0ec-4274-8221-1da02779321b" (UID: "780ddc55-e0ec-4274-8221-1da02779321b"). InnerVolumeSpecName "kube-api-access-7gmlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.737747 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "780ddc55-e0ec-4274-8221-1da02779321b" (UID: "780ddc55-e0ec-4274-8221-1da02779321b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.758590 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.758634 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.758654 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943365 4985 generic.go:334] "Generic (PLEG): container finished" podID="780ddc55-e0ec-4274-8221-1da02779321b" containerID="d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" exitCode=0 Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943430 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerDied","Data":"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89"} Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943475 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerDied","Data":"9e822149934656a89cb6b96054892965dc78f52d082c19a8cb407cbcca399709"} Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943480 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943503 4985 scope.go:117] "RemoveContainer" containerID="d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.977403 4985 scope.go:117] "RemoveContainer" containerID="689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.995905 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.008811 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.018496 4985 scope.go:117] "RemoveContainer" containerID="51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.072263 4985 scope.go:117] "RemoveContainer" containerID="d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" Jan 28 20:14:58 crc kubenswrapper[4985]: E0128 20:14:58.072851 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89\": container with ID starting with d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89 not found: ID does not exist" containerID="d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.072883 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89"} err="failed to get container status \"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89\": rpc error: code = NotFound desc = could not find container \"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89\": container with ID starting with d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89 not found: ID does not exist" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.072905 4985 scope.go:117] "RemoveContainer" containerID="689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0" Jan 28 20:14:58 crc kubenswrapper[4985]: E0128 20:14:58.073364 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0\": container with ID starting with 689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0 not found: ID does not exist" containerID="689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.073392 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0"} err="failed to get container status \"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0\": rpc error: code = NotFound desc = could not find container \"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0\": container with ID starting with 689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0 not found: ID does not exist" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.073411 4985 scope.go:117] "RemoveContainer" 
containerID="51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225" Jan 28 20:14:58 crc kubenswrapper[4985]: E0128 20:14:58.074099 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225\": container with ID starting with 51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225 not found: ID does not exist" containerID="51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.074135 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225"} err="failed to get container status \"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225\": rpc error: code = NotFound desc = could not find container \"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225\": container with ID starting with 51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225 not found: ID does not exist" Jan 28 20:14:59 crc kubenswrapper[4985]: I0128 20:14:59.296504 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="780ddc55-e0ec-4274-8221-1da02779321b" path="/var/lib/kubelet/pods/780ddc55-e0ec-4274-8221-1da02779321b/volumes" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.205816 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9"] Jan 28 20:15:00 crc kubenswrapper[4985]: E0128 20:15:00.206669 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.206692 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" Jan 28 20:15:00 crc kubenswrapper[4985]: E0128 20:15:00.206722 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="extract-content" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.206729 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="extract-content" Jan 28 20:15:00 crc kubenswrapper[4985]: E0128 20:15:00.206764 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="extract-utilities" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.206771 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="extract-utilities" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.207022 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.207882 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.220126 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.220127 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.228711 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9"] Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.318635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.318716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.318767 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.420903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.421202 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.421318 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.422773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") pod 
\"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.433861 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.439519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.558861 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:01 crc kubenswrapper[4985]: I0128 20:15:01.127295 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9"] Jan 28 20:15:02 crc kubenswrapper[4985]: I0128 20:15:01.999929 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" event={"ID":"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859","Type":"ContainerStarted","Data":"bf5a9a4213bc02951c845f7cd71f23cca4531b6e3f2b011ea18daea8dd192c3f"} Jan 28 20:15:02 crc kubenswrapper[4985]: I0128 20:15:02.000281 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" event={"ID":"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859","Type":"ContainerStarted","Data":"07a79b28c6c9fd9d1d41b6b6f945c73e2fbcba9416ce92091a201cc21c261287"} Jan 28 20:15:02 crc kubenswrapper[4985]: I0128 20:15:02.029134 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" podStartSLOduration=2.029113315 podStartE2EDuration="2.029113315s" podCreationTimestamp="2026-01-28 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 20:15:02.015166269 +0000 UTC m=+7312.841729090" watchObservedRunningTime="2026-01-28 20:15:02.029113315 +0000 UTC m=+7312.855676136" Jan 28 20:15:03 crc kubenswrapper[4985]: I0128 20:15:03.014609 4985 generic.go:334] "Generic (PLEG): container finished" podID="a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" containerID="bf5a9a4213bc02951c845f7cd71f23cca4531b6e3f2b011ea18daea8dd192c3f" exitCode=0 Jan 28 20:15:03 crc kubenswrapper[4985]: I0128 20:15:03.014974 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" event={"ID":"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859","Type":"ContainerDied","Data":"bf5a9a4213bc02951c845f7cd71f23cca4531b6e3f2b011ea18daea8dd192c3f"} Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.448215 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.536963 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") pod \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.537126 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") pod \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.537340 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") pod \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.537956 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume" (OuterVolumeSpecName: "config-volume") pod "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" (UID: "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.542268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" (UID: "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.542436 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp" (OuterVolumeSpecName: "kube-api-access-z86hp") pod "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" (UID: "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859"). InnerVolumeSpecName "kube-api-access-z86hp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.640455 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.640489 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.640500 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.040229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" event={"ID":"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859","Type":"ContainerDied","Data":"07a79b28c6c9fd9d1d41b6b6f945c73e2fbcba9416ce92091a201cc21c261287"} Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.040571 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a79b28c6c9fd9d1d41b6b6f945c73e2fbcba9416ce92091a201cc21c261287" Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.040358 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.544282 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.555464 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 20:15:07 crc kubenswrapper[4985]: I0128 20:15:07.289069 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bbf5b95-eb34-48ce-970a-48eec581f83b" path="/var/lib/kubelet/pods/2bbf5b95-eb34-48ce-970a-48eec581f83b/volumes" Jan 28 20:15:09 crc kubenswrapper[4985]: I0128 20:15:09.263787 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:15:09 crc kubenswrapper[4985]: E0128 20:15:09.264376 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.643565 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:17 crc kubenswrapper[4985]: E0128 20:15:17.645358 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" containerName="collect-profiles" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.645392 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" containerName="collect-profiles" Jan 28 20:15:17 crc 
kubenswrapper[4985]: I0128 20:15:17.646077 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" containerName="collect-profiles" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.650388 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.653549 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.791339 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-utilities\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.791410 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.791570 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkpjb\" (UniqueName: \"kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.894642 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-utilities\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.894723 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.894770 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkpjb\" (UniqueName: \"kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.895327 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-utilities\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.895389 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.914196 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkpjb\" (UniqueName: \"kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.989981 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:18 crc kubenswrapper[4985]: I0128 20:15:18.574221 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:19 crc kubenswrapper[4985]: I0128 20:15:19.239176 4985 generic.go:334] "Generic (PLEG): container finished" podID="7884ef52-21c1-4085-b345-55b1c360d446" containerID="7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426" exitCode=0 Jan 28 20:15:19 crc kubenswrapper[4985]: I0128 20:15:19.239229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerDied","Data":"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426"} Jan 28 20:15:19 crc kubenswrapper[4985]: I0128 20:15:19.239555 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerStarted","Data":"9cd2141a28017c0e4e4224a0073cd040cbc8e4c2225c113b10d2e3d36a239263"} Jan 28 20:15:20 crc kubenswrapper[4985]: I0128 20:15:20.255126 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerStarted","Data":"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301"} Jan 28 20:15:22 crc kubenswrapper[4985]: I0128 20:15:22.282925 4985 generic.go:334] "Generic (PLEG): container finished" podID="7884ef52-21c1-4085-b345-55b1c360d446" containerID="58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301" exitCode=0 Jan 28 20:15:22 crc kubenswrapper[4985]: I0128 20:15:22.283019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerDied","Data":"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301"} Jan 28 20:15:23 crc kubenswrapper[4985]: I0128 20:15:23.264228 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:15:23 crc kubenswrapper[4985]: E0128 20:15:23.264833 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:15:23 crc kubenswrapper[4985]: I0128 20:15:23.300832 4985 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerStarted","Data":"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34"} Jan 28 20:15:23 crc kubenswrapper[4985]: I0128 20:15:23.323422 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prgkh" podStartSLOduration=2.895397571 podStartE2EDuration="6.323402719s" podCreationTimestamp="2026-01-28 20:15:17 +0000 UTC" firstStartedPulling="2026-01-28 20:15:19.241283448 +0000 UTC m=+7330.067846269" lastFinishedPulling="2026-01-28 20:15:22.669288596 +0000 UTC m=+7333.495851417" observedRunningTime="2026-01-28 20:15:23.320394984 +0000 UTC m=+7334.146957825" watchObservedRunningTime="2026-01-28 20:15:23.323402719 +0000 UTC m=+7334.149965540" Jan 28 20:15:27 crc kubenswrapper[4985]: I0128 20:15:27.990304 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:27 crc kubenswrapper[4985]: I0128 20:15:27.990601 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:29 crc kubenswrapper[4985]: I0128 20:15:29.040791 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-prgkh" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" probeResult="failure" output=< Jan 28 20:15:29 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:15:29 crc kubenswrapper[4985]: > Jan 28 20:15:36 crc kubenswrapper[4985]: I0128 20:15:36.264014 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:15:36 crc kubenswrapper[4985]: E0128 20:15:36.266354 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:15:38 crc kubenswrapper[4985]: I0128 20:15:38.076277 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:38 crc kubenswrapper[4985]: I0128 20:15:38.144400 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:38 crc kubenswrapper[4985]: I0128 20:15:38.329718 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:39 crc kubenswrapper[4985]: I0128 20:15:39.522223 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prgkh" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" containerID="cri-o://14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" gracePeriod=2 Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.066608 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 20:15:27 crc kubenswrapper[4985]: I0128 20:15:27.990304 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prgkh"
Jan 28 20:15:27 crc kubenswrapper[4985]: I0128 20:15:27.990601 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prgkh"
Jan 28 20:15:29 crc kubenswrapper[4985]: I0128 20:15:29.040791 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-prgkh" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:15:29 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:15:29 crc kubenswrapper[4985]: >
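The startup-probe output above ("timeout: failed to connect service \":50051\" within 1s") is the catalog pod's gRPC health check timing out before registry-server is serving. A minimal stand-in for that check, assuming a placeholder pod IP and a plain TCP connect as an approximation of whatever the probe binary actually does:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Rough reproduction of the startup probe above: try to reach the
    // registry-server gRPC port within a 1s budget. The address is a
    // placeholder; the real probe runs inside the pod (hence the bare
    // ":50051") and speaks gRPC health-checking, not raw TCP.
    func main() {
    	addr := "10.217.0.99:50051" // placeholder: substitute the pod IP from `oc get pod -o wide`
    	conn, err := net.DialTimeout("tcp", addr, 1*time.Second)
    	if err != nil {
    		fmt.Printf("timeout: failed to connect service %q within 1s (%v)\n", ":50051", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port open; the real probe would pass once the server also reports healthy")
    }

Here the failure is transient: the pod turns "started" and "ready" at 20:15:38 below, about 15s after registry-server started, presumably once the extracted catalog content had loaded.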
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.348103 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.350732 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkpjb\" (UniqueName: \"kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534473 4985 generic.go:334] "Generic (PLEG): container finished" podID="7884ef52-21c1-4085-b345-55b1c360d446" containerID="14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" exitCode=0 Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534522 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerDied","Data":"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34"} Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534528 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerDied","Data":"9cd2141a28017c0e4e4224a0073cd040cbc8e4c2225c113b10d2e3d36a239263"} Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534565 4985 scope.go:117] "RemoveContainer" containerID="14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.558768 4985 scope.go:117] "RemoveContainer" containerID="58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.590036 4985 scope.go:117] "RemoveContainer" containerID="7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.592622 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.602399 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.630901 4985 scope.go:117] "RemoveContainer" containerID="14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" Jan 28 20:15:40 crc kubenswrapper[4985]: E0128 20:15:40.631293 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34\": container with ID starting with 14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34 not found: ID does not exist" containerID="14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.631327 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34"} err="failed to get container status 
\"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34\": rpc error: code = NotFound desc = could not find container \"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34\": container with ID starting with 14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34 not found: ID does not exist" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.631355 4985 scope.go:117] "RemoveContainer" containerID="58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301" Jan 28 20:15:40 crc kubenswrapper[4985]: E0128 20:15:40.632058 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301\": container with ID starting with 58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301 not found: ID does not exist" containerID="58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.632082 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301"} err="failed to get container status \"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301\": rpc error: code = NotFound desc = could not find container \"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301\": container with ID starting with 58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301 not found: ID does not exist" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.632101 4985 scope.go:117] "RemoveContainer" containerID="7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426" Jan 28 20:15:40 crc kubenswrapper[4985]: E0128 20:15:40.632324 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426\": container with ID starting with 7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426 not found: ID does not exist" containerID="7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.632354 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426"} err="failed to get container status \"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426\": rpc error: code = NotFound desc = could not find container \"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426\": container with ID starting with 7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426 not found: ID does not exist" Jan 28 20:15:41 crc kubenswrapper[4985]: I0128 20:15:41.280493 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7884ef52-21c1-4085-b345-55b1c360d446" path="/var/lib/kubelet/pods/7884ef52-21c1-4085-b345-55b1c360d446/volumes" Jan 28 20:15:50 crc kubenswrapper[4985]: I0128 20:15:50.081785 4985 scope.go:117] "RemoveContainer" containerID="6c8e48c972aa2e298f7430451a2f30fabf8f72218697856b1aa3451401eef4e3" Jan 28 20:15:51 crc kubenswrapper[4985]: I0128 20:15:51.275162 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:15:51 crc kubenswrapper[4985]: E0128 20:15:51.275892 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:16:03 crc kubenswrapper[4985]: I0128 20:16:03.264325 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:16:03 crc kubenswrapper[4985]: E0128 20:16:03.265276 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:16:14 crc kubenswrapper[4985]: I0128 20:16:14.265863 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:16:15 crc kubenswrapper[4985]: I0128 20:16:15.019452 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"ccb4b242faf2f155289f8c78cfbb83c60584760e0e0e839f8fc517c62011675e"} Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.477402 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:16:56 crc kubenswrapper[4985]: E0128 20:16:56.482939 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.483053 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" Jan 28 20:16:56 crc kubenswrapper[4985]: E0128 20:16:56.483141 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="extract-utilities" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.483221 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="extract-utilities" Jan 28 20:16:56 crc kubenswrapper[4985]: E0128 20:16:56.483355 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="extract-content" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.483431 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="extract-content" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.483805 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.485972 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.496272 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"]
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.599614 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.600141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.600419 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.702603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.702749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.702799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.708581 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.709851 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.741542 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48"
\"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.829980 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:57 crc kubenswrapper[4985]: I0128 20:16:57.395948 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:16:57 crc kubenswrapper[4985]: I0128 20:16:57.669162 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerStarted","Data":"d0c828507998153509cb8a317ad848048d3eada54492a5c445052c355affa924"} Jan 28 20:16:58 crc kubenswrapper[4985]: I0128 20:16:58.684964 4985 generic.go:334] "Generic (PLEG): container finished" podID="14330adf-7291-4226-8936-5d853944f1a3" containerID="c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c" exitCode=0 Jan 28 20:16:58 crc kubenswrapper[4985]: I0128 20:16:58.685038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerDied","Data":"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c"} Jan 28 20:16:59 crc kubenswrapper[4985]: I0128 20:16:59.699400 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerStarted","Data":"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175"} Jan 28 20:17:06 crc kubenswrapper[4985]: I0128 20:17:06.797715 4985 generic.go:334] "Generic (PLEG): container finished" podID="14330adf-7291-4226-8936-5d853944f1a3" containerID="930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175" exitCode=0 Jan 28 20:17:06 crc kubenswrapper[4985]: I0128 20:17:06.797790 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerDied","Data":"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175"} Jan 28 20:17:07 crc kubenswrapper[4985]: I0128 20:17:07.816323 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerStarted","Data":"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42"} Jan 28 20:17:07 crc kubenswrapper[4985]: I0128 20:17:07.844623 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gjw48" podStartSLOduration=3.214362572 podStartE2EDuration="11.844599337s" podCreationTimestamp="2026-01-28 20:16:56 +0000 UTC" firstStartedPulling="2026-01-28 20:16:58.688038849 +0000 UTC m=+7429.514601680" lastFinishedPulling="2026-01-28 20:17:07.318275624 +0000 UTC m=+7438.144838445" observedRunningTime="2026-01-28 20:17:07.83834034 +0000 UTC m=+7438.664903201" watchObservedRunningTime="2026-01-28 20:17:07.844599337 +0000 UTC m=+7438.671162168" Jan 28 20:17:16 crc kubenswrapper[4985]: I0128 20:17:16.830483 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 
Jan 28 20:17:16 crc kubenswrapper[4985]: I0128 20:17:16.831094 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:17:17 crc kubenswrapper[4985]: I0128 20:17:17.889372 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gjw48" podUID="14330adf-7291-4226-8936-5d853944f1a3" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:17:17 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:17:17 crc kubenswrapper[4985]: >
Jan 28 20:17:27 crc kubenswrapper[4985]: I0128 20:17:27.932021 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gjw48" podUID="14330adf-7291-4226-8936-5d853944f1a3" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:17:27 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:17:27 crc kubenswrapper[4985]: >
Jan 28 20:17:37 crc kubenswrapper[4985]: I0128 20:17:37.897845 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gjw48" podUID="14330adf-7291-4226-8936-5d853944f1a3" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:17:37 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:17:37 crc kubenswrapper[4985]: >
Jan 28 20:17:46 crc kubenswrapper[4985]: I0128 20:17:46.909202 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:17:46 crc kubenswrapper[4985]: I0128 20:17:46.996091 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gjw48"
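The three startup-probe failures above land almost exactly ten seconds apart (20:17:17.889, 20:17:27.932, 20:17:37.898), consistent with a probe period of roughly 10s; the container then reports started and ready at 20:17:46, about 39s after registry-server started at 20:17:07. This is the same pattern the certified-operators pod showed earlier, just with a longer warm-up.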
Jan 28 20:17:47 crc kubenswrapper[4985]: I0128 20:17:47.163715 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"]
Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.347299 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gjw48" podUID="14330adf-7291-4226-8936-5d853944f1a3" containerName="registry-server" containerID="cri-o://a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" gracePeriod=2
Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.901893 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gjw48"
Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.990431 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") pod \"14330adf-7291-4226-8936-5d853944f1a3\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") "
Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.990618 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") pod \"14330adf-7291-4226-8936-5d853944f1a3\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") "
Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.990832 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") pod \"14330adf-7291-4226-8936-5d853944f1a3\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") "
Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.992325 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities" (OuterVolumeSpecName: "utilities") pod "14330adf-7291-4226-8936-5d853944f1a3" (UID: "14330adf-7291-4226-8936-5d853944f1a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.010323 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg" (OuterVolumeSpecName: "kube-api-access-7v2hg") pod "14330adf-7291-4226-8936-5d853944f1a3" (UID: "14330adf-7291-4226-8936-5d853944f1a3"). InnerVolumeSpecName "kube-api-access-7v2hg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.095182 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.095240 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") on node \"crc\" DevicePath \"\""
Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.148733 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14330adf-7291-4226-8936-5d853944f1a3" (UID: "14330adf-7291-4226-8936-5d853944f1a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.197367 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407516 4985 generic.go:334] "Generic (PLEG): container finished" podID="14330adf-7291-4226-8936-5d853944f1a3" containerID="a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" exitCode=0 Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerDied","Data":"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42"} Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407675 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerDied","Data":"d0c828507998153509cb8a317ad848048d3eada54492a5c445052c355affa924"} Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407715 4985 scope.go:117] "RemoveContainer" containerID="a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407895 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.457851 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.470191 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.472086 4985 scope.go:117] "RemoveContainer" containerID="930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.510376 4985 scope.go:117] "RemoveContainer" containerID="c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.560453 4985 scope.go:117] "RemoveContainer" containerID="a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" Jan 28 20:17:49 crc kubenswrapper[4985]: E0128 20:17:49.560957 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42\": container with ID starting with a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42 not found: ID does not exist" containerID="a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.560987 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42"} err="failed to get container status \"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42\": rpc error: code = NotFound desc = could not find container \"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42\": container with ID starting with a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42 not found: ID does not exist" Jan 28 20:17:49 crc 
Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.561010 4985 scope.go:117] "RemoveContainer" containerID="930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175"
Jan 28 20:17:49 crc kubenswrapper[4985]: E0128 20:17:49.561452 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175\": container with ID starting with 930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175 not found: ID does not exist" containerID="930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175"
Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.561476 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175"} err="failed to get container status \"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175\": rpc error: code = NotFound desc = could not find container \"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175\": container with ID starting with 930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175 not found: ID does not exist"
Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.561488 4985 scope.go:117] "RemoveContainer" containerID="c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c"
Jan 28 20:17:49 crc kubenswrapper[4985]: E0128 20:17:49.561824 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c\": container with ID starting with c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c not found: ID does not exist" containerID="c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c"
Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.561844 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c"} err="failed to get container status \"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c\": rpc error: code = NotFound desc = could not find container \"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c\": container with ID starting with c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c not found: ID does not exist"
Jan 28 20:17:51 crc kubenswrapper[4985]: I0128 20:17:51.286365 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14330adf-7291-4226-8936-5d853944f1a3" path="/var/lib/kubelet/pods/14330adf-7291-4226-8936-5d853944f1a3/volumes"
Jan 28 20:18:41 crc kubenswrapper[4985]: I0128 20:18:41.186573 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 20:18:41 crc kubenswrapper[4985]: I0128 20:18:41.188409 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
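The closing entries return to the machine-config-daemon, whose liveness probe is an HTTP GET of http://127.0.0.1:8798/health; "connection refused" means nothing is listening on the port at all (the process is down or not yet serving), rather than a slow or 5xx response. A close approximation of that check, with the 1s timeout being an assumption:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // Approximation of the liveness check in the last two entries above:
    // an HTTP GET of the daemon's health endpoint. The timeout value is an
    // assumption; the URL and failure mode are taken from the log.
    func main() {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		fmt.Println("probe failure:", err) // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("probe status:", resp.Status)
    }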